00:00:00.000 Started by upstream project "autotest-per-patch" build number 132781 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.072 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.283 > git --version # 'git version 2.39.2' 00:00:00.283 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.336 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.027 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.039 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.053 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:11.053 > git config core.sparsecheckout # timeout=10 00:00:11.067 > git read-tree -mu HEAD # timeout=10 00:00:11.085 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.108 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.108 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:11.192 [Pipeline] Start of Pipeline 00:00:11.202 [Pipeline] library 00:00:11.203 Loading library shm_lib@master 00:00:11.203 Library shm_lib@master is cached. Copying from home. 00:00:11.215 [Pipeline] node 00:15:31.317 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:31.319 [Pipeline] { 00:15:31.330 [Pipeline] catchError 00:15:31.332 [Pipeline] { 00:15:31.346 [Pipeline] wrap 00:15:31.356 [Pipeline] { 00:15:31.365 [Pipeline] stage 00:15:31.367 [Pipeline] { (Prologue) 00:15:31.585 [Pipeline] sh 00:15:31.871 + logger -p user.info -t JENKINS-CI 00:15:31.894 [Pipeline] echo 00:15:31.896 Node: WFP8 00:15:31.904 [Pipeline] sh 00:15:32.203 [Pipeline] setCustomBuildProperty 00:15:32.215 [Pipeline] echo 00:15:32.216 Cleanup processes 00:15:32.222 [Pipeline] sh 00:15:32.506 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:32.506 385889 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:32.520 [Pipeline] sh 00:15:32.804 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:32.804 ++ grep -v 'sudo pgrep' 00:15:32.804 ++ awk '{print $1}' 00:15:32.804 + sudo kill -9 00:15:32.804 + true 00:15:32.819 [Pipeline] cleanWs 00:15:32.828 [WS-CLEANUP] Deleting project workspace... 00:15:32.828 [WS-CLEANUP] Deferred wipeout is used... 
00:15:32.834 [WS-CLEANUP] done 00:15:32.839 [Pipeline] setCustomBuildProperty 00:15:32.854 [Pipeline] sh 00:15:33.137 + sudo git config --global --replace-all safe.directory '*' 00:15:33.352 [Pipeline] httpRequest 00:15:33.775 [Pipeline] echo 00:15:33.777 Sorcerer 10.211.164.101 is alive 00:15:33.786 [Pipeline] retry 00:15:33.789 [Pipeline] { 00:15:33.803 [Pipeline] httpRequest 00:15:33.807 HttpMethod: GET 00:15:33.808 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:15:33.808 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:15:33.811 Response Code: HTTP/1.1 200 OK 00:15:33.812 Success: Status code 200 is in the accepted range: 200,404 00:15:33.812 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:15:33.959 [Pipeline] } 00:15:33.976 [Pipeline] // retry 00:15:33.984 [Pipeline] sh 00:15:34.268 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:15:34.284 [Pipeline] httpRequest 00:15:34.639 [Pipeline] echo 00:15:34.641 Sorcerer 10.211.164.101 is alive 00:15:34.650 [Pipeline] retry 00:15:34.651 [Pipeline] { 00:15:34.666 [Pipeline] httpRequest 00:15:34.670 HttpMethod: GET 00:15:34.670 URL: http://10.211.164.101/packages/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:15:34.671 Sending request to url: http://10.211.164.101/packages/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:15:34.674 Response Code: HTTP/1.1 200 OK 00:15:34.674 Success: Status code 200 is in the accepted range: 200,404 00:15:34.674 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:15:36.944 [Pipeline] } 00:15:36.961 [Pipeline] // retry 00:15:36.970 [Pipeline] sh 00:15:37.254 + tar --no-same-owner -xf spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:15:39.801 [Pipeline] sh 00:15:40.084 + git -C spdk log 
--oneline -n5 00:15:40.084 b920049a1 env: use 4-KiB memory mapping in no-huge mode 00:15:40.084 070cd5283 env: extend the page table to support 4-KiB mapping 00:15:40.084 6c714c5fe env: add mem_map_fini and vtophys_fini for cleanup 00:15:40.084 b7d7c4b24 env: handle possible DPDK errors in mem_map_init 00:15:40.084 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:15:40.095 [Pipeline] } 00:15:40.110 [Pipeline] // stage 00:15:40.121 [Pipeline] stage 00:15:40.123 [Pipeline] { (Prepare) 00:15:40.141 [Pipeline] writeFile 00:15:40.162 [Pipeline] sh 00:15:40.449 + logger -p user.info -t JENKINS-CI 00:15:40.462 [Pipeline] sh 00:15:40.747 + logger -p user.info -t JENKINS-CI 00:15:40.759 [Pipeline] sh 00:15:41.044 + cat autorun-spdk.conf 00:15:41.044 SPDK_RUN_FUNCTIONAL_TEST=1 00:15:41.044 SPDK_TEST_NVMF=1 00:15:41.044 SPDK_TEST_NVME_CLI=1 00:15:41.044 SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:41.044 SPDK_TEST_NVMF_NICS=e810 00:15:41.044 SPDK_TEST_VFIOUSER=1 00:15:41.044 SPDK_RUN_UBSAN=1 00:15:41.044 NET_TYPE=phy 00:15:41.052 RUN_NIGHTLY=0 00:15:41.057 [Pipeline] readFile 00:15:41.080 [Pipeline] withEnv 00:15:41.082 [Pipeline] { 00:15:41.095 [Pipeline] sh 00:15:41.382 + set -ex 00:15:41.382 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:15:41.382 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:41.382 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:41.382 ++ SPDK_TEST_NVMF=1 00:15:41.382 ++ SPDK_TEST_NVME_CLI=1 00:15:41.382 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:41.382 ++ SPDK_TEST_NVMF_NICS=e810 00:15:41.382 ++ SPDK_TEST_VFIOUSER=1 00:15:41.382 ++ SPDK_RUN_UBSAN=1 00:15:41.382 ++ NET_TYPE=phy 00:15:41.382 ++ RUN_NIGHTLY=0 00:15:41.382 + case $SPDK_TEST_NVMF_NICS in 00:15:41.382 + DRIVERS=ice 00:15:41.382 + [[ tcp == \r\d\m\a ]] 00:15:41.382 + [[ -n ice ]] 00:15:41.382 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:15:41.382 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:15:41.382 rmmod: ERROR: 
Module mlx5_ib is not currently loaded 00:15:41.382 rmmod: ERROR: Module irdma is not currently loaded 00:15:41.382 rmmod: ERROR: Module i40iw is not currently loaded 00:15:41.382 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:15:41.382 + true 00:15:41.382 + for D in $DRIVERS 00:15:41.382 + sudo modprobe ice 00:15:41.382 + exit 0 00:15:41.391 [Pipeline] } 00:15:41.407 [Pipeline] // withEnv 00:15:41.411 [Pipeline] } 00:15:41.423 [Pipeline] // stage 00:15:41.432 [Pipeline] catchError 00:15:41.434 [Pipeline] { 00:15:41.446 [Pipeline] timeout 00:15:41.446 Timeout set to expire in 1 hr 0 min 00:15:41.448 [Pipeline] { 00:15:41.461 [Pipeline] stage 00:15:41.463 [Pipeline] { (Tests) 00:15:41.476 [Pipeline] sh 00:15:41.763 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:41.763 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:41.763 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:41.763 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:15:41.763 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:41.763 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:15:41.763 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:15:41.763 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:15:41.763 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:15:41.763 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:15:41.763 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:15:41.763 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:41.763 + source /etc/os-release 00:15:41.763 ++ NAME='Fedora Linux' 00:15:41.763 ++ VERSION='39 (Cloud Edition)' 00:15:41.763 ++ ID=fedora 00:15:41.763 ++ VERSION_ID=39 00:15:41.763 ++ VERSION_CODENAME= 00:15:41.763 ++ PLATFORM_ID=platform:f39 00:15:41.763 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:15:41.763 ++ ANSI_COLOR='0;38;2;60;110;180' 00:15:41.763 ++ LOGO=fedora-logo-icon 00:15:41.763 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:15:41.763 ++ HOME_URL=https://fedoraproject.org/ 00:15:41.763 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:15:41.763 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:15:41.763 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:15:41.763 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:15:41.763 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:15:41.763 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:15:41.763 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:15:41.763 ++ SUPPORT_END=2024-11-12 00:15:41.763 ++ VARIANT='Cloud Edition' 00:15:41.763 ++ VARIANT_ID=cloud 00:15:41.763 + uname -a 00:15:41.763 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:15:41.763 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:15:43.673 Hugepages 00:15:43.673 node hugesize free / total 00:15:43.673 node0 1048576kB 0 / 0 00:15:43.673 node0 2048kB 0 / 0 00:15:43.673 node1 1048576kB 0 / 0 00:15:43.673 node1 2048kB 0 / 0 00:15:43.673 00:15:43.673 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:43.673 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:15:43.673 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:15:43.673 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:15:43.934 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:15:43.934 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:15:43.934 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:15:43.934 + rm -f /tmp/spdk-ld-path 00:15:43.934 + source autorun-spdk.conf 00:15:43.934 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:43.934 ++ SPDK_TEST_NVMF=1 00:15:43.934 ++ SPDK_TEST_NVME_CLI=1 00:15:43.934 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:43.934 ++ SPDK_TEST_NVMF_NICS=e810 00:15:43.934 ++ SPDK_TEST_VFIOUSER=1 00:15:43.934 ++ SPDK_RUN_UBSAN=1 00:15:43.934 ++ NET_TYPE=phy 00:15:43.934 ++ RUN_NIGHTLY=0 00:15:43.934 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:15:43.934 + [[ -n '' ]] 00:15:43.934 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:43.934 + for M in /var/spdk/build-*-manifest.txt 00:15:43.934 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:15:43.934 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:15:43.934 + for M in /var/spdk/build-*-manifest.txt 00:15:43.934 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:15:43.934 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:15:43.934 + for M in /var/spdk/build-*-manifest.txt 00:15:43.934 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:15:43.934 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:15:43.934 ++ uname 00:15:43.934 + [[ Linux == \L\i\n\u\x ]] 00:15:43.934 + sudo dmesg -T 00:15:43.934 + sudo dmesg --clear 00:15:43.934 + dmesg_pid=386813 00:15:43.934 + [[ Fedora Linux == FreeBSD ]] 00:15:43.934 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:43.934 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:43.934 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:15:43.934 + [[ -x /usr/src/fio-static/fio ]] 00:15:43.934 + export FIO_BIN=/usr/src/fio-static/fio 00:15:43.934 + FIO_BIN=/usr/src/fio-static/fio 00:15:43.934 + sudo dmesg -Tw 00:15:43.934 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:15:43.934 + [[ ! -v VFIO_QEMU_BIN ]] 00:15:43.934 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:15:43.934 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:43.934 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:43.934 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:15:43.934 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:43.934 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:43.934 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:43.934 10:26:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:15:43.934 10:26:45 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:15:43.934 10:26:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:15:43.934 10:26:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:15:43.934 10:26:45 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:44.193 10:26:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:15:44.193 10:26:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.193 10:26:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:15:44.193 10:26:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:15:44.193 10:26:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.193 10:26:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.194 10:26:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.194 10:26:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.194 10:26:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.194 10:26:45 -- paths/export.sh@5 -- $ export PATH 00:15:44.194 10:26:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.194 10:26:45 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:15:44.194 10:26:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:15:44.194 10:26:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733736405.XXXXXX 00:15:44.194 10:26:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733736405.Sn525O 00:15:44.194 10:26:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:15:44.194 10:26:45 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:15:44.194 10:26:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:15:44.194 10:26:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:15:44.194 10:26:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:15:44.194 10:26:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:15:44.194 10:26:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:15:44.194 10:26:45 -- common/autotest_common.sh@10 -- $ set +x 00:15:44.194 10:26:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:15:44.194 10:26:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:15:44.194 10:26:45 -- pm/common@17 -- $ local monitor 00:15:44.194 10:26:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:44.194 10:26:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:44.194 10:26:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:44.194 10:26:45 -- pm/common@21 -- $ date +%s 00:15:44.194 10:26:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:44.194 10:26:45 -- pm/common@21 -- $ date +%s 00:15:44.194 10:26:45 -- pm/common@25 -- $ sleep 1 00:15:44.194 10:26:45 -- pm/common@21 -- $ date +%s 00:15:44.194 10:26:45 -- pm/common@21 -- $ date +%s 00:15:44.194 10:26:45 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733736405 00:15:44.194 10:26:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733736405 00:15:44.194 10:26:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733736405 00:15:44.194 10:26:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733736405 00:15:44.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733736405_collect-vmstat.pm.log 00:15:44.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733736405_collect-cpu-load.pm.log 00:15:44.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733736405_collect-cpu-temp.pm.log 00:15:44.194 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733736405_collect-bmc-pm.bmc.pm.log 00:15:45.133 10:26:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:15:45.133 10:26:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:15:45.133 10:26:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:15:45.133 10:26:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:45.133 10:26:46 -- spdk/autobuild.sh@16 -- $ date -u 00:15:45.133 Mon Dec 9 09:26:46 AM UTC 2024 00:15:45.133 10:26:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:15:45.133 v25.01-pre-317-gb920049a1 00:15:45.133 10:26:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:15:45.133 10:26:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:15:45.133 10:26:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:15:45.133 10:26:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:15:45.133 10:26:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:15:45.133 10:26:46 -- common/autotest_common.sh@10 -- $ set +x 00:15:45.133 ************************************ 00:15:45.133 START TEST ubsan 00:15:45.133 ************************************ 00:15:45.133 10:26:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:15:45.133 using ubsan 00:15:45.133 00:15:45.133 real 0m0.000s 00:15:45.133 user 0m0.000s 00:15:45.133 sys 0m0.000s 00:15:45.133 10:26:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:15:45.133 10:26:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:15:45.133 ************************************ 00:15:45.133 END TEST ubsan 00:15:45.133 ************************************ 00:15:45.133 10:26:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:15:45.133 10:26:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:15:45.133 10:26:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:15:45.133 10:26:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:15:45.392 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:45.392 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:45.652 Using 'verbs' RDMA provider 00:15:58.461 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:16:10.680 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:16:10.680 Creating mk/config.mk...done. 00:16:10.680 Creating mk/cc.flags.mk...done. 00:16:10.680 Type 'make' to build. 00:16:10.680 10:27:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:16:10.680 10:27:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:16:10.680 10:27:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:16:10.680 10:27:10 -- common/autotest_common.sh@10 -- $ set +x 00:16:10.680 ************************************ 00:16:10.680 START TEST make 00:16:10.680 ************************************ 00:16:10.680 10:27:10 make -- common/autotest_common.sh@1129 -- $ make -j96 00:16:10.680 make[1]: Nothing to be done for 'all'. 
00:16:10.943 The Meson build system 00:16:10.943 Version: 1.5.0 00:16:10.943 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:16:10.943 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:16:10.943 Build type: native build 00:16:10.943 Project name: libvfio-user 00:16:10.943 Project version: 0.0.1 00:16:10.943 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:16:10.943 C linker for the host machine: cc ld.bfd 2.40-14 00:16:10.943 Host machine cpu family: x86_64 00:16:10.943 Host machine cpu: x86_64 00:16:10.943 Run-time dependency threads found: YES 00:16:10.943 Library dl found: YES 00:16:10.943 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:16:10.943 Run-time dependency json-c found: YES 0.17 00:16:10.943 Run-time dependency cmocka found: YES 1.1.7 00:16:10.943 Program pytest-3 found: NO 00:16:10.943 Program flake8 found: NO 00:16:10.943 Program misspell-fixer found: NO 00:16:10.943 Program restructuredtext-lint found: NO 00:16:10.943 Program valgrind found: YES (/usr/bin/valgrind) 00:16:10.943 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:16:10.943 Compiler for C supports arguments -Wmissing-declarations: YES 00:16:10.943 Compiler for C supports arguments -Wwrite-strings: YES 00:16:10.943 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:16:10.943 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:16:10.943 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:16:10.943 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:16:10.943 Build targets in project: 8 00:16:10.943 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:16:10.943 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:16:10.943 00:16:10.943 libvfio-user 0.0.1 00:16:10.943 00:16:10.943 User defined options 00:16:10.943 buildtype : debug 00:16:10.943 default_library: shared 00:16:10.943 libdir : /usr/local/lib 00:16:10.943 00:16:10.943 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:11.511 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:16:11.511 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:16:11.771 [2/37] Compiling C object samples/null.p/null.c.o 00:16:11.771 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:16:11.771 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:16:11.771 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:16:11.771 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:16:11.771 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:16:11.771 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:16:11.771 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:16:11.771 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:16:11.771 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:16:11.771 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:16:11.771 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:16:11.771 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:16:11.771 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:16:11.771 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:16:11.771 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:16:11.771 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci.c.o 00:16:11.771 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:16:11.771 [20/37] Compiling C object samples/server.p/server.c.o 00:16:11.771 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:16:11.771 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:16:11.771 [23/37] Compiling C object samples/client.p/client.c.o 00:16:11.771 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:16:11.771 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:16:11.771 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:16:11.771 [27/37] Linking target samples/client 00:16:11.771 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:16:11.771 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:16:11.771 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:16:11.771 [31/37] Linking target test/unit_tests 00:16:12.031 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:16:12.031 [33/37] Linking target samples/null 00:16:12.031 [34/37] Linking target samples/server 00:16:12.031 [35/37] Linking target samples/gpio-pci-idio-16 00:16:12.031 [36/37] Linking target samples/lspci 00:16:12.031 [37/37] Linking target samples/shadow_ioeventfd_server 00:16:12.031 INFO: autodetecting backend as ninja 00:16:12.031 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:16:12.031 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:16:12.599 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:16:12.599 ninja: no work to do. 
00:16:17.873 The Meson build system 00:16:17.873 Version: 1.5.0 00:16:17.873 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:16:17.873 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:16:17.873 Build type: native build 00:16:17.873 Program cat found: YES (/usr/bin/cat) 00:16:17.873 Project name: DPDK 00:16:17.873 Project version: 24.03.0 00:16:17.873 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:16:17.873 C linker for the host machine: cc ld.bfd 2.40-14 00:16:17.873 Host machine cpu family: x86_64 00:16:17.873 Host machine cpu: x86_64 00:16:17.873 Message: ## Building in Developer Mode ## 00:16:17.873 Program pkg-config found: YES (/usr/bin/pkg-config) 00:16:17.873 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:16:17.873 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:16:17.873 Program python3 found: YES (/usr/bin/python3) 00:16:17.873 Program cat found: YES (/usr/bin/cat) 00:16:17.873 Compiler for C supports arguments -march=native: YES 00:16:17.873 Checking for size of "void *" : 8 00:16:17.873 Checking for size of "void *" : 8 (cached) 00:16:17.873 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:16:17.873 Library m found: YES 00:16:17.873 Library numa found: YES 00:16:17.873 Has header "numaif.h" : YES 00:16:17.873 Library fdt found: NO 00:16:17.873 Library execinfo found: NO 00:16:17.873 Has header "execinfo.h" : YES 00:16:17.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:16:17.873 Run-time dependency libarchive found: NO (tried pkgconfig) 00:16:17.873 Run-time dependency libbsd found: NO (tried pkgconfig) 00:16:17.873 Run-time dependency jansson found: NO (tried pkgconfig) 00:16:17.873 Run-time dependency openssl found: YES 3.1.1 00:16:17.873 Run-time 
dependency libpcap found: YES 1.10.4 00:16:17.873 Has header "pcap.h" with dependency libpcap: YES 00:16:17.873 Compiler for C supports arguments -Wcast-qual: YES 00:16:17.873 Compiler for C supports arguments -Wdeprecated: YES 00:16:17.873 Compiler for C supports arguments -Wformat: YES 00:16:17.873 Compiler for C supports arguments -Wformat-nonliteral: NO 00:16:17.873 Compiler for C supports arguments -Wformat-security: NO 00:16:17.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:16:17.873 Compiler for C supports arguments -Wmissing-prototypes: YES 00:16:17.873 Compiler for C supports arguments -Wnested-externs: YES 00:16:17.873 Compiler for C supports arguments -Wold-style-definition: YES 00:16:17.873 Compiler for C supports arguments -Wpointer-arith: YES 00:16:17.873 Compiler for C supports arguments -Wsign-compare: YES 00:16:17.873 Compiler for C supports arguments -Wstrict-prototypes: YES 00:16:17.873 Compiler for C supports arguments -Wundef: YES 00:16:17.873 Compiler for C supports arguments -Wwrite-strings: YES 00:16:17.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:16:17.873 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:16:17.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:16:17.873 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:16:17.873 Program objdump found: YES (/usr/bin/objdump) 00:16:17.873 Compiler for C supports arguments -mavx512f: YES 00:16:17.873 Checking if "AVX512 checking" compiles: YES 00:16:17.873 Fetching value of define "__SSE4_2__" : 1 00:16:17.873 Fetching value of define "__AES__" : 1 00:16:17.873 Fetching value of define "__AVX__" : 1 00:16:17.873 Fetching value of define "__AVX2__" : 1 00:16:17.873 Fetching value of define "__AVX512BW__" : 1 00:16:17.873 Fetching value of define "__AVX512CD__" : 1 00:16:17.873 Fetching value of define "__AVX512DQ__" : 1 00:16:17.873 Fetching value of define "__AVX512F__" : 1 
00:16:17.873 Fetching value of define "__AVX512VL__" : 1 00:16:17.873 Fetching value of define "__PCLMUL__" : 1 00:16:17.873 Fetching value of define "__RDRND__" : 1 00:16:17.873 Fetching value of define "__RDSEED__" : 1 00:16:17.874 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:16:17.874 Fetching value of define "__znver1__" : (undefined) 00:16:17.874 Fetching value of define "__znver2__" : (undefined) 00:16:17.874 Fetching value of define "__znver3__" : (undefined) 00:16:17.874 Fetching value of define "__znver4__" : (undefined) 00:16:17.874 Compiler for C supports arguments -Wno-format-truncation: YES 00:16:17.874 Message: lib/log: Defining dependency "log" 00:16:17.874 Message: lib/kvargs: Defining dependency "kvargs" 00:16:17.874 Message: lib/telemetry: Defining dependency "telemetry" 00:16:17.874 Checking for function "getentropy" : NO 00:16:17.874 Message: lib/eal: Defining dependency "eal" 00:16:17.874 Message: lib/ring: Defining dependency "ring" 00:16:17.874 Message: lib/rcu: Defining dependency "rcu" 00:16:17.874 Message: lib/mempool: Defining dependency "mempool" 00:16:17.874 Message: lib/mbuf: Defining dependency "mbuf" 00:16:17.874 Fetching value of define "__PCLMUL__" : 1 (cached) 00:16:17.874 Fetching value of define "__AVX512F__" : 1 (cached) 00:16:17.874 Fetching value of define "__AVX512BW__" : 1 (cached) 00:16:17.874 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:16:17.874 Fetching value of define "__AVX512VL__" : 1 (cached) 00:16:17.874 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:16:17.874 Compiler for C supports arguments -mpclmul: YES 00:16:17.874 Compiler for C supports arguments -maes: YES 00:16:17.874 Compiler for C supports arguments -mavx512f: YES (cached) 00:16:17.874 Compiler for C supports arguments -mavx512bw: YES 00:16:17.874 Compiler for C supports arguments -mavx512dq: YES 00:16:17.874 Compiler for C supports arguments -mavx512vl: YES 00:16:17.874 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:16:17.874 Compiler for C supports arguments -mavx2: YES 00:16:17.874 Compiler for C supports arguments -mavx: YES 00:16:17.874 Message: lib/net: Defining dependency "net" 00:16:17.874 Message: lib/meter: Defining dependency "meter" 00:16:17.874 Message: lib/ethdev: Defining dependency "ethdev" 00:16:17.874 Message: lib/pci: Defining dependency "pci" 00:16:17.874 Message: lib/cmdline: Defining dependency "cmdline" 00:16:17.874 Message: lib/hash: Defining dependency "hash" 00:16:17.874 Message: lib/timer: Defining dependency "timer" 00:16:17.874 Message: lib/compressdev: Defining dependency "compressdev" 00:16:17.874 Message: lib/cryptodev: Defining dependency "cryptodev" 00:16:17.874 Message: lib/dmadev: Defining dependency "dmadev" 00:16:17.874 Compiler for C supports arguments -Wno-cast-qual: YES 00:16:17.874 Message: lib/power: Defining dependency "power" 00:16:17.874 Message: lib/reorder: Defining dependency "reorder" 00:16:17.874 Message: lib/security: Defining dependency "security" 00:16:17.874 Has header "linux/userfaultfd.h" : YES 00:16:17.874 Has header "linux/vduse.h" : YES 00:16:17.874 Message: lib/vhost: Defining dependency "vhost" 00:16:17.874 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:16:17.874 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:16:17.874 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:16:17.874 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:16:17.874 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:16:17.874 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:16:17.874 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:16:17.874 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:16:17.874 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:16:17.874 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:16:17.874 Program doxygen found: YES (/usr/local/bin/doxygen) 00:16:17.874 Configuring doxy-api-html.conf using configuration 00:16:17.874 Configuring doxy-api-man.conf using configuration 00:16:17.874 Program mandb found: YES (/usr/bin/mandb) 00:16:17.874 Program sphinx-build found: NO 00:16:17.874 Configuring rte_build_config.h using configuration 00:16:17.874 Message: 00:16:17.874 ================= 00:16:17.874 Applications Enabled 00:16:17.874 ================= 00:16:17.874 00:16:17.874 apps: 00:16:17.874 00:16:17.874 00:16:17.874 Message: 00:16:17.874 ================= 00:16:17.874 Libraries Enabled 00:16:17.874 ================= 00:16:17.874 00:16:17.874 libs: 00:16:17.874 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:16:17.874 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:16:17.874 cryptodev, dmadev, power, reorder, security, vhost, 00:16:17.874 00:16:17.874 Message: 00:16:17.874 =============== 00:16:17.874 Drivers Enabled 00:16:17.874 =============== 00:16:17.874 00:16:17.874 common: 00:16:17.874 00:16:17.874 bus: 00:16:17.874 pci, vdev, 00:16:17.874 mempool: 00:16:17.874 ring, 00:16:17.874 dma: 00:16:17.874 00:16:17.874 net: 00:16:17.874 00:16:17.874 crypto: 00:16:17.874 00:16:17.874 compress: 00:16:17.874 00:16:17.874 vdpa: 00:16:17.874 00:16:17.874 00:16:17.874 Message: 00:16:17.874 ================= 00:16:17.874 Content Skipped 00:16:17.874 ================= 00:16:17.874 00:16:17.874 apps: 00:16:17.874 dumpcap: explicitly disabled via build config 00:16:17.874 graph: explicitly disabled via build config 00:16:17.874 pdump: explicitly disabled via build config 00:16:17.874 proc-info: explicitly disabled via build config 00:16:17.874 test-acl: explicitly disabled via build config 00:16:17.874 test-bbdev: explicitly disabled via build config 00:16:17.874 test-cmdline: explicitly disabled via build config 00:16:17.874 test-compress-perf: explicitly disabled via build config 00:16:17.874 test-crypto-perf: explicitly disabled 
via build config 00:16:17.874 test-dma-perf: explicitly disabled via build config 00:16:17.874 test-eventdev: explicitly disabled via build config 00:16:17.874 test-fib: explicitly disabled via build config 00:16:17.874 test-flow-perf: explicitly disabled via build config 00:16:17.874 test-gpudev: explicitly disabled via build config 00:16:17.874 test-mldev: explicitly disabled via build config 00:16:17.874 test-pipeline: explicitly disabled via build config 00:16:17.874 test-pmd: explicitly disabled via build config 00:16:17.874 test-regex: explicitly disabled via build config 00:16:17.874 test-sad: explicitly disabled via build config 00:16:17.874 test-security-perf: explicitly disabled via build config 00:16:17.874 00:16:17.874 libs: 00:16:17.874 argparse: explicitly disabled via build config 00:16:17.874 metrics: explicitly disabled via build config 00:16:17.874 acl: explicitly disabled via build config 00:16:17.874 bbdev: explicitly disabled via build config 00:16:17.874 bitratestats: explicitly disabled via build config 00:16:17.874 bpf: explicitly disabled via build config 00:16:17.874 cfgfile: explicitly disabled via build config 00:16:17.874 distributor: explicitly disabled via build config 00:16:17.874 efd: explicitly disabled via build config 00:16:17.874 eventdev: explicitly disabled via build config 00:16:17.874 dispatcher: explicitly disabled via build config 00:16:17.874 gpudev: explicitly disabled via build config 00:16:17.874 gro: explicitly disabled via build config 00:16:17.874 gso: explicitly disabled via build config 00:16:17.874 ip_frag: explicitly disabled via build config 00:16:17.874 jobstats: explicitly disabled via build config 00:16:17.874 latencystats: explicitly disabled via build config 00:16:17.874 lpm: explicitly disabled via build config 00:16:17.874 member: explicitly disabled via build config 00:16:17.874 pcapng: explicitly disabled via build config 00:16:17.874 rawdev: explicitly disabled via build config 00:16:17.874 regexdev: 
explicitly disabled via build config 00:16:17.874 mldev: explicitly disabled via build config 00:16:17.874 rib: explicitly disabled via build config 00:16:17.874 sched: explicitly disabled via build config 00:16:17.874 stack: explicitly disabled via build config 00:16:17.874 ipsec: explicitly disabled via build config 00:16:17.874 pdcp: explicitly disabled via build config 00:16:17.874 fib: explicitly disabled via build config 00:16:17.874 port: explicitly disabled via build config 00:16:17.874 pdump: explicitly disabled via build config 00:16:17.874 table: explicitly disabled via build config 00:16:17.874 pipeline: explicitly disabled via build config 00:16:17.874 graph: explicitly disabled via build config 00:16:17.874 node: explicitly disabled via build config 00:16:17.874 00:16:17.874 drivers: 00:16:17.874 common/cpt: not in enabled drivers build config 00:16:17.874 common/dpaax: not in enabled drivers build config 00:16:17.874 common/iavf: not in enabled drivers build config 00:16:17.874 common/idpf: not in enabled drivers build config 00:16:17.874 common/ionic: not in enabled drivers build config 00:16:17.874 common/mvep: not in enabled drivers build config 00:16:17.874 common/octeontx: not in enabled drivers build config 00:16:17.874 bus/auxiliary: not in enabled drivers build config 00:16:17.874 bus/cdx: not in enabled drivers build config 00:16:17.874 bus/dpaa: not in enabled drivers build config 00:16:17.874 bus/fslmc: not in enabled drivers build config 00:16:17.874 bus/ifpga: not in enabled drivers build config 00:16:17.874 bus/platform: not in enabled drivers build config 00:16:17.874 bus/uacce: not in enabled drivers build config 00:16:17.874 bus/vmbus: not in enabled drivers build config 00:16:17.874 common/cnxk: not in enabled drivers build config 00:16:17.874 common/mlx5: not in enabled drivers build config 00:16:17.874 common/nfp: not in enabled drivers build config 00:16:17.874 common/nitrox: not in enabled drivers build config 00:16:17.874 
common/qat: not in enabled drivers build config 00:16:17.874 common/sfc_efx: not in enabled drivers build config 00:16:17.874 mempool/bucket: not in enabled drivers build config 00:16:17.874 mempool/cnxk: not in enabled drivers build config 00:16:17.874 mempool/dpaa: not in enabled drivers build config 00:16:17.874 mempool/dpaa2: not in enabled drivers build config 00:16:17.874 mempool/octeontx: not in enabled drivers build config 00:16:17.874 mempool/stack: not in enabled drivers build config 00:16:17.875 dma/cnxk: not in enabled drivers build config 00:16:17.875 dma/dpaa: not in enabled drivers build config 00:16:17.875 dma/dpaa2: not in enabled drivers build config 00:16:17.875 dma/hisilicon: not in enabled drivers build config 00:16:17.875 dma/idxd: not in enabled drivers build config 00:16:17.875 dma/ioat: not in enabled drivers build config 00:16:17.875 dma/skeleton: not in enabled drivers build config 00:16:17.875 net/af_packet: not in enabled drivers build config 00:16:17.875 net/af_xdp: not in enabled drivers build config 00:16:17.875 net/ark: not in enabled drivers build config 00:16:17.875 net/atlantic: not in enabled drivers build config 00:16:17.875 net/avp: not in enabled drivers build config 00:16:17.875 net/axgbe: not in enabled drivers build config 00:16:17.875 net/bnx2x: not in enabled drivers build config 00:16:17.875 net/bnxt: not in enabled drivers build config 00:16:17.875 net/bonding: not in enabled drivers build config 00:16:17.875 net/cnxk: not in enabled drivers build config 00:16:17.875 net/cpfl: not in enabled drivers build config 00:16:17.875 net/cxgbe: not in enabled drivers build config 00:16:17.875 net/dpaa: not in enabled drivers build config 00:16:17.875 net/dpaa2: not in enabled drivers build config 00:16:17.875 net/e1000: not in enabled drivers build config 00:16:17.875 net/ena: not in enabled drivers build config 00:16:17.875 net/enetc: not in enabled drivers build config 00:16:17.875 net/enetfec: not in enabled drivers build 
config 00:16:17.875 net/enic: not in enabled drivers build config 00:16:17.875 net/failsafe: not in enabled drivers build config 00:16:17.875 net/fm10k: not in enabled drivers build config 00:16:17.875 net/gve: not in enabled drivers build config 00:16:17.875 net/hinic: not in enabled drivers build config 00:16:17.875 net/hns3: not in enabled drivers build config 00:16:17.875 net/i40e: not in enabled drivers build config 00:16:17.875 net/iavf: not in enabled drivers build config 00:16:17.875 net/ice: not in enabled drivers build config 00:16:17.875 net/idpf: not in enabled drivers build config 00:16:17.875 net/igc: not in enabled drivers build config 00:16:17.875 net/ionic: not in enabled drivers build config 00:16:17.875 net/ipn3ke: not in enabled drivers build config 00:16:17.875 net/ixgbe: not in enabled drivers build config 00:16:17.875 net/mana: not in enabled drivers build config 00:16:17.875 net/memif: not in enabled drivers build config 00:16:17.875 net/mlx4: not in enabled drivers build config 00:16:17.875 net/mlx5: not in enabled drivers build config 00:16:17.875 net/mvneta: not in enabled drivers build config 00:16:17.875 net/mvpp2: not in enabled drivers build config 00:16:17.875 net/netvsc: not in enabled drivers build config 00:16:17.875 net/nfb: not in enabled drivers build config 00:16:17.875 net/nfp: not in enabled drivers build config 00:16:17.875 net/ngbe: not in enabled drivers build config 00:16:17.875 net/null: not in enabled drivers build config 00:16:17.875 net/octeontx: not in enabled drivers build config 00:16:17.875 net/octeon_ep: not in enabled drivers build config 00:16:17.875 net/pcap: not in enabled drivers build config 00:16:17.875 net/pfe: not in enabled drivers build config 00:16:17.875 net/qede: not in enabled drivers build config 00:16:17.875 net/ring: not in enabled drivers build config 00:16:17.875 net/sfc: not in enabled drivers build config 00:16:17.875 net/softnic: not in enabled drivers build config 00:16:17.875 net/tap: 
not in enabled drivers build config 00:16:17.875 net/thunderx: not in enabled drivers build config 00:16:17.875 net/txgbe: not in enabled drivers build config 00:16:17.875 net/vdev_netvsc: not in enabled drivers build config 00:16:17.875 net/vhost: not in enabled drivers build config 00:16:17.875 net/virtio: not in enabled drivers build config 00:16:17.875 net/vmxnet3: not in enabled drivers build config 00:16:17.875 raw/*: missing internal dependency, "rawdev" 00:16:17.875 crypto/armv8: not in enabled drivers build config 00:16:17.875 crypto/bcmfs: not in enabled drivers build config 00:16:17.875 crypto/caam_jr: not in enabled drivers build config 00:16:17.875 crypto/ccp: not in enabled drivers build config 00:16:17.875 crypto/cnxk: not in enabled drivers build config 00:16:17.875 crypto/dpaa_sec: not in enabled drivers build config 00:16:17.875 crypto/dpaa2_sec: not in enabled drivers build config 00:16:17.875 crypto/ipsec_mb: not in enabled drivers build config 00:16:17.875 crypto/mlx5: not in enabled drivers build config 00:16:17.875 crypto/mvsam: not in enabled drivers build config 00:16:17.875 crypto/nitrox: not in enabled drivers build config 00:16:17.875 crypto/null: not in enabled drivers build config 00:16:17.875 crypto/octeontx: not in enabled drivers build config 00:16:17.875 crypto/openssl: not in enabled drivers build config 00:16:17.875 crypto/scheduler: not in enabled drivers build config 00:16:17.875 crypto/uadk: not in enabled drivers build config 00:16:17.875 crypto/virtio: not in enabled drivers build config 00:16:17.875 compress/isal: not in enabled drivers build config 00:16:17.875 compress/mlx5: not in enabled drivers build config 00:16:17.875 compress/nitrox: not in enabled drivers build config 00:16:17.875 compress/octeontx: not in enabled drivers build config 00:16:17.875 compress/zlib: not in enabled drivers build config 00:16:17.875 regex/*: missing internal dependency, "regexdev" 00:16:17.875 ml/*: missing internal dependency, "mldev" 
00:16:17.875 vdpa/ifc: not in enabled drivers build config 00:16:17.875 vdpa/mlx5: not in enabled drivers build config 00:16:17.875 vdpa/nfp: not in enabled drivers build config 00:16:17.875 vdpa/sfc: not in enabled drivers build config 00:16:17.875 event/*: missing internal dependency, "eventdev" 00:16:17.875 baseband/*: missing internal dependency, "bbdev" 00:16:17.875 gpu/*: missing internal dependency, "gpudev" 00:16:17.875 00:16:17.875 00:16:17.875 Build targets in project: 85 00:16:17.875 00:16:17.875 DPDK 24.03.0 00:16:17.875 00:16:17.875 User defined options 00:16:17.875 buildtype : debug 00:16:17.875 default_library : shared 00:16:17.875 libdir : lib 00:16:17.875 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:17.875 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:16:17.875 c_link_args : 00:16:17.875 cpu_instruction_set: native 00:16:17.875 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:16:17.875 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:16:17.875 enable_docs : false 00:16:17.875 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:16:17.875 enable_kmods : false 00:16:17.875 max_lcores : 128 00:16:17.875 tests : false 00:16:17.875 00:16:17.875 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:18.139 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:16:18.139 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:16:18.139 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:16:18.139 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:16:18.139 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:16:18.139 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:16:18.139 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:16:18.139 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:16:18.411 [8/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:16:18.411 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:16:18.411 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:16:18.411 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:16:18.411 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:16:18.411 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:16:18.411 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:16:18.411 [15/268] Linking static target lib/librte_kvargs.a 00:16:18.411 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:16:18.411 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:16:18.412 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:16:18.412 [19/268] Linking static target lib/librte_log.a 00:16:18.412 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:16:18.412 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:16:18.412 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:16:18.412 [23/268] Linking static target lib/librte_pci.a 00:16:18.670 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:16:18.670 
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:16:18.670 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:16:18.670 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:16:18.670 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:16:18.670 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:16:18.670 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:16:18.670 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:16:18.670 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:16:18.670 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:16:18.670 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:16:18.670 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:16:18.670 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:16:18.670 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:16:18.670 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:16:18.670 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:16:18.670 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:16:18.670 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:16:18.670 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:16:18.670 [43/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:16:18.670 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:16:18.670 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:16:18.670 [46/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:16:18.670 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:16:18.670 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:16:18.670 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:16:18.670 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:16:18.670 [51/268] Linking static target lib/librte_meter.a 00:16:18.670 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:16:18.670 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:16:18.670 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:16:18.670 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:16:18.670 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:16:18.670 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:16:18.670 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:16:18.670 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:16:18.670 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:16:18.670 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:16:18.670 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:16:18.670 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:16:18.670 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:16:18.670 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:16:18.670 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:16:18.670 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:16:18.670 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:16:18.931 [69/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 
00:16:18.931 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:16:18.931 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:16:18.931 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:16:18.931 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:16:18.931 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:16:18.931 [75/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:16:18.931 [76/268] Linking static target lib/librte_ring.a 00:16:18.931 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:16:18.931 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:16:18.931 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:16:18.931 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:16:18.931 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:16:18.931 [82/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:16:18.931 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:16:18.931 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:16:18.931 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:16:18.931 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:16:18.931 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:16:18.931 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:16:18.931 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:16:18.931 [90/268] Linking static target lib/librte_telemetry.a 00:16:18.931 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:16:18.931 [92/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:16:18.931 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:16:18.931 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:16:18.931 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:16:18.931 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:18.931 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:16:18.931 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:16:18.931 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:16:18.931 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:16:18.931 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:16:18.931 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:16:18.931 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:16:18.931 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:16:18.931 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:16:18.931 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:16:18.931 [107/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:16:18.931 [108/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:16:18.931 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:16:18.931 [110/268] Linking static target lib/librte_net.a 00:16:18.931 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:16:18.931 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:16:18.931 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:16:18.931 [114/268] Linking static target lib/librte_rcu.a 00:16:18.931 [115/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:16:18.931 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:16:18.931 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:16:18.931 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:16:18.931 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:16:18.931 [120/268] Linking static target lib/librte_eal.a 00:16:18.931 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:16:18.931 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:16:18.931 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:16:18.932 [124/268] Linking static target lib/librte_mempool.a 00:16:18.932 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:16:18.932 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:16:18.932 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:16:18.932 [128/268] Linking static target lib/librte_cmdline.a 00:16:18.932 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:16:18.932 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:16:18.932 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:16:18.932 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:16:19.190 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:16:19.190 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:16:19.190 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:16:19.190 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:16:19.190 [138/268] 
Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:16:19.190 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:16:19.190 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:16:19.190 [143/268] Linking static target lib/librte_mbuf.a 00:16:19.190 [144/268] Linking target lib/librte_log.so.24.1 00:16:19.190 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:16:19.190 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [147/268] Linking static target lib/librte_timer.a 00:16:19.190 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:16:19.190 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:16:19.190 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:16:19.190 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:16:19.190 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:16:19.190 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:16:19.190 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:16:19.190 [156/268] Linking static target lib/librte_compressdev.a 00:16:19.190 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:16:19.190 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:16:19.190 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:16:19.190 [160/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:16:19.190 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:16:19.190 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:16:19.190 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:16:19.190 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:16:19.190 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:16:19.190 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.190 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:16:19.190 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:16:19.448 [169/268] Linking target lib/librte_kvargs.so.24.1 00:16:19.448 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:16:19.448 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:16:19.448 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:16:19.448 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:16:19.448 [174/268] Linking static target lib/librte_reorder.a 00:16:19.448 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:16:19.449 [176/268] Linking target lib/librte_telemetry.so.24.1 00:16:19.449 [177/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:16:19.449 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:16:19.449 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:16:19.449 [180/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:16:19.449 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:16:19.449 [182/268] Linking static target lib/librte_power.a 00:16:19.449 [183/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:16:19.449 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:16:19.449 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:16:19.449 [186/268] Linking static target lib/librte_dmadev.a 00:16:19.449 [187/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:16:19.449 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:16:19.449 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:16:19.449 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:16:19.449 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:16:19.449 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:19.449 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:16:19.449 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:16:19.449 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:19.449 [196/268] Linking static target drivers/librte_bus_vdev.a 00:16:19.449 [197/268] Linking static target lib/librte_hash.a 00:16:19.449 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:16:19.449 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:16:19.449 [200/268] Linking static target lib/librte_security.a 00:16:19.449 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:16:19.449 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:16:19.707 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.707 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:19.707 [205/268] 
Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:19.707 [206/268] Linking static target drivers/librte_mempool_ring.a 00:16:19.707 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:16:19.707 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:19.707 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:19.707 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:16:19.707 [211/268] Linking static target drivers/librte_bus_pci.a 00:16:19.707 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.707 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.707 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:16:19.707 [215/268] Linking static target lib/librte_cryptodev.a 00:16:19.966 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.966 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.966 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:16:19.966 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:19.966 [220/268] Linking static target lib/librte_ethdev.a 00:16:19.966 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:20.224 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:16:20.224 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:16:20.224 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:16:20.224 [225/268] Generating lib/power.sym_chk with a 
custom command (wrapped by meson to capture output) 00:16:20.483 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:16:20.483 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:21.423 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:16:21.423 [229/268] Linking static target lib/librte_vhost.a 00:16:21.682 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:23.061 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:28.332 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:28.591 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:16:28.591 [234/268] Linking target lib/librte_eal.so.24.1 00:16:28.850 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:16:28.850 [236/268] Linking target lib/librte_pci.so.24.1 00:16:28.850 [237/268] Linking target lib/librte_ring.so.24.1 00:16:28.850 [238/268] Linking target lib/librte_dmadev.so.24.1 00:16:28.850 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:16:28.850 [240/268] Linking target lib/librte_timer.so.24.1 00:16:28.850 [241/268] Linking target lib/librte_meter.so.24.1 00:16:28.850 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:16:28.850 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:16:28.850 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:16:28.850 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:16:28.850 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:16:28.850 [247/268] Linking target lib/librte_rcu.so.24.1 00:16:28.850 
[248/268] Linking target lib/librte_mempool.so.24.1 00:16:28.850 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:16:29.109 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:16:29.109 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:16:29.109 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:16:29.109 [253/268] Linking target lib/librte_mbuf.so.24.1 00:16:29.367 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:16:29.367 [255/268] Linking target lib/librte_compressdev.so.24.1 00:16:29.367 [256/268] Linking target lib/librte_net.so.24.1 00:16:29.367 [257/268] Linking target lib/librte_reorder.so.24.1 00:16:29.367 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:16:29.367 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:16:29.367 [260/268] Linking target lib/librte_cmdline.so.24.1 00:16:29.367 [261/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:16:29.367 [262/268] Linking target lib/librte_hash.so.24.1 00:16:29.367 [263/268] Linking target lib/librte_ethdev.so.24.1 00:16:29.367 [264/268] Linking target lib/librte_security.so.24.1 00:16:29.628 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:16:29.628 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:16:29.628 [267/268] Linking target lib/librte_power.so.24.1 00:16:29.628 [268/268] Linking target lib/librte_vhost.so.24.1 00:16:29.628 INFO: autodetecting backend as ninja 00:16:29.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:16:41.852 CC lib/ut_mock/mock.o 00:16:41.852 CC lib/log/log.o 00:16:41.852 CC lib/log/log_flags.o 00:16:41.852 CC lib/log/log_deprecated.o 00:16:41.852 CC lib/ut/ut.o 
00:16:41.852 LIB libspdk_ut_mock.a 00:16:41.852 LIB libspdk_log.a 00:16:41.852 LIB libspdk_ut.a 00:16:41.852 SO libspdk_ut_mock.so.6.0 00:16:41.852 SO libspdk_log.so.7.1 00:16:41.852 SO libspdk_ut.so.2.0 00:16:41.852 SYMLINK libspdk_ut_mock.so 00:16:41.852 SYMLINK libspdk_log.so 00:16:41.852 SYMLINK libspdk_ut.so 00:16:41.852 CC lib/dma/dma.o 00:16:41.852 CXX lib/trace_parser/trace.o 00:16:41.852 CC lib/util/base64.o 00:16:41.852 CC lib/util/bit_array.o 00:16:41.852 CC lib/util/cpuset.o 00:16:41.852 CC lib/util/crc16.o 00:16:41.852 CC lib/util/crc32.o 00:16:41.852 CC lib/util/crc32c.o 00:16:41.852 CC lib/util/fd.o 00:16:41.852 CC lib/util/crc32_ieee.o 00:16:41.852 CC lib/util/crc64.o 00:16:41.852 CC lib/util/fd_group.o 00:16:41.852 CC lib/util/dif.o 00:16:41.852 CC lib/util/file.o 00:16:41.852 CC lib/util/hexlify.o 00:16:41.852 CC lib/util/iov.o 00:16:41.852 CC lib/util/math.o 00:16:41.852 CC lib/util/net.o 00:16:41.852 CC lib/util/pipe.o 00:16:41.852 CC lib/util/strerror_tls.o 00:16:41.852 CC lib/util/string.o 00:16:41.852 CC lib/util/xor.o 00:16:41.852 CC lib/util/uuid.o 00:16:41.852 CC lib/util/zipf.o 00:16:41.852 CC lib/util/md5.o 00:16:41.852 CC lib/ioat/ioat.o 00:16:41.852 CC lib/vfio_user/host/vfio_user_pci.o 00:16:41.852 CC lib/vfio_user/host/vfio_user.o 00:16:41.852 LIB libspdk_dma.a 00:16:41.852 SO libspdk_dma.so.5.0 00:16:41.852 LIB libspdk_ioat.a 00:16:41.852 SYMLINK libspdk_dma.so 00:16:41.852 SO libspdk_ioat.so.7.0 00:16:41.852 SYMLINK libspdk_ioat.so 00:16:41.852 LIB libspdk_vfio_user.a 00:16:41.852 SO libspdk_vfio_user.so.5.0 00:16:41.852 SYMLINK libspdk_vfio_user.so 00:16:41.852 LIB libspdk_util.a 00:16:41.852 SO libspdk_util.so.10.1 00:16:41.852 SYMLINK libspdk_util.so 00:16:41.852 LIB libspdk_trace_parser.a 00:16:42.112 SO libspdk_trace_parser.so.6.0 00:16:42.112 SYMLINK libspdk_trace_parser.so 00:16:42.112 CC lib/conf/conf.o 00:16:42.112 CC lib/vmd/led.o 00:16:42.112 CC lib/vmd/vmd.o 00:16:42.112 CC lib/json/json_parse.o 00:16:42.112 CC 
lib/idxd/idxd.o 00:16:42.112 CC lib/env_dpdk/env.o 00:16:42.112 CC lib/json/json_util.o 00:16:42.112 CC lib/json/json_write.o 00:16:42.112 CC lib/idxd/idxd_user.o 00:16:42.112 CC lib/env_dpdk/memory.o 00:16:42.112 CC lib/idxd/idxd_kernel.o 00:16:42.112 CC lib/env_dpdk/pci.o 00:16:42.370 CC lib/rdma_utils/rdma_utils.o 00:16:42.370 CC lib/env_dpdk/init.o 00:16:42.370 CC lib/env_dpdk/threads.o 00:16:42.370 CC lib/env_dpdk/pci_ioat.o 00:16:42.370 CC lib/env_dpdk/pci_virtio.o 00:16:42.370 CC lib/env_dpdk/pci_vmd.o 00:16:42.370 CC lib/env_dpdk/pci_idxd.o 00:16:42.370 CC lib/env_dpdk/pci_event.o 00:16:42.370 CC lib/env_dpdk/sigbus_handler.o 00:16:42.370 CC lib/env_dpdk/pci_dpdk.o 00:16:42.370 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:42.370 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:42.370 LIB libspdk_conf.a 00:16:42.370 SO libspdk_conf.so.6.0 00:16:42.370 SYMLINK libspdk_conf.so 00:16:42.629 LIB libspdk_rdma_utils.a 00:16:42.629 LIB libspdk_json.a 00:16:42.629 SO libspdk_rdma_utils.so.1.0 00:16:42.629 SO libspdk_json.so.6.0 00:16:42.629 SYMLINK libspdk_rdma_utils.so 00:16:42.629 SYMLINK libspdk_json.so 00:16:42.629 LIB libspdk_idxd.a 00:16:42.629 LIB libspdk_vmd.a 00:16:42.629 SO libspdk_idxd.so.12.1 00:16:42.888 SO libspdk_vmd.so.6.0 00:16:42.888 SYMLINK libspdk_idxd.so 00:16:42.888 SYMLINK libspdk_vmd.so 00:16:42.888 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:42.888 CC lib/rdma_provider/common.o 00:16:42.888 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:42.888 CC lib/jsonrpc/jsonrpc_server.o 00:16:42.888 CC lib/jsonrpc/jsonrpc_client.o 00:16:42.888 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:43.146 LIB libspdk_rdma_provider.a 00:16:43.146 LIB libspdk_jsonrpc.a 00:16:43.146 SO libspdk_rdma_provider.so.7.0 00:16:43.146 SO libspdk_jsonrpc.so.6.0 00:16:43.146 SYMLINK libspdk_rdma_provider.so 00:16:43.146 SYMLINK libspdk_jsonrpc.so 00:16:43.146 LIB libspdk_env_dpdk.a 00:16:43.404 SO libspdk_env_dpdk.so.15.1 00:16:43.404 SYMLINK libspdk_env_dpdk.so 00:16:43.404 CC lib/rpc/rpc.o 
00:16:43.662 LIB libspdk_rpc.a 00:16:43.663 SO libspdk_rpc.so.6.0 00:16:43.663 SYMLINK libspdk_rpc.so 00:16:43.920 CC lib/trace/trace.o 00:16:43.920 CC lib/trace/trace_flags.o 00:16:43.920 CC lib/trace/trace_rpc.o 00:16:44.178 CC lib/notify/notify.o 00:16:44.178 CC lib/notify/notify_rpc.o 00:16:44.178 CC lib/keyring/keyring.o 00:16:44.178 CC lib/keyring/keyring_rpc.o 00:16:44.178 LIB libspdk_notify.a 00:16:44.178 SO libspdk_notify.so.6.0 00:16:44.178 LIB libspdk_trace.a 00:16:44.178 LIB libspdk_keyring.a 00:16:44.178 SO libspdk_trace.so.11.0 00:16:44.436 SYMLINK libspdk_notify.so 00:16:44.436 SO libspdk_keyring.so.2.0 00:16:44.436 SYMLINK libspdk_trace.so 00:16:44.436 SYMLINK libspdk_keyring.so 00:16:44.694 CC lib/sock/sock.o 00:16:44.694 CC lib/sock/sock_rpc.o 00:16:44.694 CC lib/thread/thread.o 00:16:44.694 CC lib/thread/iobuf.o 00:16:44.952 LIB libspdk_sock.a 00:16:44.952 SO libspdk_sock.so.10.0 00:16:44.952 SYMLINK libspdk_sock.so 00:16:45.517 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:45.517 CC lib/nvme/nvme_ctrlr.o 00:16:45.517 CC lib/nvme/nvme_fabric.o 00:16:45.517 CC lib/nvme/nvme_ns_cmd.o 00:16:45.517 CC lib/nvme/nvme_ns.o 00:16:45.517 CC lib/nvme/nvme_pcie_common.o 00:16:45.517 CC lib/nvme/nvme_pcie.o 00:16:45.517 CC lib/nvme/nvme_qpair.o 00:16:45.517 CC lib/nvme/nvme.o 00:16:45.517 CC lib/nvme/nvme_quirks.o 00:16:45.517 CC lib/nvme/nvme_transport.o 00:16:45.517 CC lib/nvme/nvme_discovery.o 00:16:45.517 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:45.517 CC lib/nvme/nvme_opal.o 00:16:45.517 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:45.517 CC lib/nvme/nvme_io_msg.o 00:16:45.517 CC lib/nvme/nvme_poll_group.o 00:16:45.517 CC lib/nvme/nvme_tcp.o 00:16:45.517 CC lib/nvme/nvme_stubs.o 00:16:45.517 CC lib/nvme/nvme_zns.o 00:16:45.517 CC lib/nvme/nvme_cuse.o 00:16:45.517 CC lib/nvme/nvme_auth.o 00:16:45.517 CC lib/nvme/nvme_vfio_user.o 00:16:45.517 CC lib/nvme/nvme_rdma.o 00:16:45.774 LIB libspdk_thread.a 00:16:45.774 SO libspdk_thread.so.11.0 00:16:45.774 SYMLINK 
libspdk_thread.so 00:16:46.031 CC lib/virtio/virtio.o 00:16:46.031 CC lib/virtio/virtio_vhost_user.o 00:16:46.031 CC lib/virtio/virtio_vfio_user.o 00:16:46.031 CC lib/fsdev/fsdev.o 00:16:46.031 CC lib/virtio/virtio_pci.o 00:16:46.031 CC lib/fsdev/fsdev_rpc.o 00:16:46.031 CC lib/fsdev/fsdev_io.o 00:16:46.031 CC lib/accel/accel.o 00:16:46.031 CC lib/accel/accel_rpc.o 00:16:46.031 CC lib/accel/accel_sw.o 00:16:46.031 CC lib/init/json_config.o 00:16:46.031 CC lib/init/subsystem.o 00:16:46.031 CC lib/blob/blobstore.o 00:16:46.031 CC lib/init/subsystem_rpc.o 00:16:46.031 CC lib/blob/request.o 00:16:46.031 CC lib/init/rpc.o 00:16:46.031 CC lib/blob/zeroes.o 00:16:46.031 CC lib/blob/blob_bs_dev.o 00:16:46.031 CC lib/vfu_tgt/tgt_endpoint.o 00:16:46.031 CC lib/vfu_tgt/tgt_rpc.o 00:16:46.289 LIB libspdk_init.a 00:16:46.289 LIB libspdk_virtio.a 00:16:46.289 SO libspdk_init.so.6.0 00:16:46.289 SO libspdk_virtio.so.7.0 00:16:46.289 LIB libspdk_vfu_tgt.a 00:16:46.547 SO libspdk_vfu_tgt.so.3.0 00:16:46.547 SYMLINK libspdk_init.so 00:16:46.548 SYMLINK libspdk_virtio.so 00:16:46.548 SYMLINK libspdk_vfu_tgt.so 00:16:46.548 LIB libspdk_fsdev.a 00:16:46.548 SO libspdk_fsdev.so.2.0 00:16:46.806 SYMLINK libspdk_fsdev.so 00:16:46.806 CC lib/event/app.o 00:16:46.806 CC lib/event/reactor.o 00:16:46.806 CC lib/event/log_rpc.o 00:16:46.806 CC lib/event/app_rpc.o 00:16:46.806 CC lib/event/scheduler_static.o 00:16:46.806 LIB libspdk_accel.a 00:16:46.806 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:47.065 SO libspdk_accel.so.16.0 00:16:47.065 SYMLINK libspdk_accel.so 00:16:47.065 LIB libspdk_nvme.a 00:16:47.065 LIB libspdk_event.a 00:16:47.065 SO libspdk_event.so.14.0 00:16:47.065 SO libspdk_nvme.so.15.0 00:16:47.324 SYMLINK libspdk_event.so 00:16:47.324 CC lib/bdev/bdev.o 00:16:47.324 CC lib/bdev/bdev_rpc.o 00:16:47.324 CC lib/bdev/bdev_zone.o 00:16:47.324 CC lib/bdev/part.o 00:16:47.324 CC lib/bdev/scsi_nvme.o 00:16:47.324 SYMLINK libspdk_nvme.so 00:16:47.324 LIB libspdk_fuse_dispatcher.a 
00:16:47.324 SO libspdk_fuse_dispatcher.so.1.0 00:16:47.583 SYMLINK libspdk_fuse_dispatcher.so 00:16:48.521 LIB libspdk_blob.a 00:16:48.521 SO libspdk_blob.so.12.0 00:16:48.521 SYMLINK libspdk_blob.so 00:16:48.780 CC lib/lvol/lvol.o 00:16:48.780 CC lib/blobfs/blobfs.o 00:16:48.780 CC lib/blobfs/tree.o 00:16:49.039 LIB libspdk_bdev.a 00:16:49.298 SO libspdk_bdev.so.17.0 00:16:49.298 LIB libspdk_blobfs.a 00:16:49.298 SYMLINK libspdk_bdev.so 00:16:49.298 SO libspdk_blobfs.so.11.0 00:16:49.298 LIB libspdk_lvol.a 00:16:49.298 SYMLINK libspdk_blobfs.so 00:16:49.298 SO libspdk_lvol.so.11.0 00:16:49.557 SYMLINK libspdk_lvol.so 00:16:49.557 CC lib/nvmf/ctrlr.o 00:16:49.557 CC lib/nvmf/ctrlr_bdev.o 00:16:49.557 CC lib/nvmf/ctrlr_discovery.o 00:16:49.557 CC lib/nvmf/nvmf.o 00:16:49.557 CC lib/nvmf/subsystem.o 00:16:49.557 CC lib/nvmf/nvmf_rpc.o 00:16:49.557 CC lib/nvmf/transport.o 00:16:49.557 CC lib/nvmf/tcp.o 00:16:49.557 CC lib/nvmf/stubs.o 00:16:49.557 CC lib/nvmf/vfio_user.o 00:16:49.557 CC lib/nvmf/mdns_server.o 00:16:49.557 CC lib/nvmf/rdma.o 00:16:49.557 CC lib/nvmf/auth.o 00:16:49.557 CC lib/ftl/ftl_core.o 00:16:49.557 CC lib/ftl/ftl_init.o 00:16:49.557 CC lib/ftl/ftl_layout.o 00:16:49.557 CC lib/ftl/ftl_debug.o 00:16:49.557 CC lib/ftl/ftl_io.o 00:16:49.557 CC lib/ftl/ftl_l2p.o 00:16:49.557 CC lib/ftl/ftl_sb.o 00:16:49.557 CC lib/ftl/ftl_nv_cache.o 00:16:49.557 CC lib/ftl/ftl_l2p_flat.o 00:16:49.557 CC lib/ublk/ublk.o 00:16:49.557 CC lib/ftl/ftl_band.o 00:16:49.557 CC lib/ublk/ublk_rpc.o 00:16:49.557 CC lib/ftl/ftl_writer.o 00:16:49.557 CC lib/ftl/ftl_band_ops.o 00:16:49.557 CC lib/ftl/ftl_rq.o 00:16:49.557 CC lib/ftl/ftl_reloc.o 00:16:49.557 CC lib/ftl/ftl_p2l.o 00:16:49.557 CC lib/ftl/ftl_p2l_log.o 00:16:49.557 CC lib/ftl/ftl_l2p_cache.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:49.557 CC lib/scsi/dev.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:49.557 CC lib/scsi/port.o 00:16:49.557 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:16:49.557 CC lib/scsi/lun.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:49.557 CC lib/scsi/scsi_bdev.o 00:16:49.557 CC lib/scsi/scsi.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:49.557 CC lib/scsi/scsi_pr.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:49.557 CC lib/scsi/scsi_rpc.o 00:16:49.557 CC lib/nbd/nbd.o 00:16:49.557 CC lib/scsi/task.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:49.557 CC lib/nbd/nbd_rpc.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:49.557 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:49.557 CC lib/ftl/utils/ftl_conf.o 00:16:49.557 CC lib/ftl/utils/ftl_mempool.o 00:16:49.557 CC lib/ftl/utils/ftl_md.o 00:16:49.557 CC lib/ftl/utils/ftl_bitmap.o 00:16:49.557 CC lib/ftl/utils/ftl_property.o 00:16:49.557 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:49.557 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:49.557 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:49.557 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:49.557 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:49.557 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:49.557 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:49.557 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:49.557 CC lib/ftl/base/ftl_base_bdev.o 00:16:49.557 CC lib/ftl/ftl_trace.o 00:16:49.557 CC lib/ftl/base/ftl_base_dev.o 00:16:50.124 LIB libspdk_nbd.a 00:16:50.124 SO libspdk_nbd.so.7.0 00:16:50.381 LIB libspdk_scsi.a 00:16:50.381 LIB libspdk_ublk.a 00:16:50.381 SO libspdk_scsi.so.9.0 00:16:50.381 SYMLINK libspdk_nbd.so 00:16:50.381 SO libspdk_ublk.so.3.0 00:16:50.381 SYMLINK libspdk_scsi.so 00:16:50.381 SYMLINK 
libspdk_ublk.so 00:16:50.381 LIB libspdk_ftl.a 00:16:50.639 CC lib/vhost/vhost_rpc.o 00:16:50.639 CC lib/vhost/vhost.o 00:16:50.639 CC lib/vhost/vhost_blk.o 00:16:50.639 SO libspdk_ftl.so.9.0 00:16:50.639 CC lib/vhost/vhost_scsi.o 00:16:50.639 CC lib/vhost/rte_vhost_user.o 00:16:50.639 CC lib/iscsi/conn.o 00:16:50.639 CC lib/iscsi/init_grp.o 00:16:50.639 CC lib/iscsi/iscsi.o 00:16:50.639 CC lib/iscsi/param.o 00:16:50.639 CC lib/iscsi/portal_grp.o 00:16:50.639 CC lib/iscsi/tgt_node.o 00:16:50.639 CC lib/iscsi/iscsi_subsystem.o 00:16:50.640 CC lib/iscsi/iscsi_rpc.o 00:16:50.640 CC lib/iscsi/task.o 00:16:50.897 SYMLINK libspdk_ftl.so 00:16:51.463 LIB libspdk_nvmf.a 00:16:51.463 SO libspdk_nvmf.so.20.0 00:16:51.463 LIB libspdk_vhost.a 00:16:51.463 SO libspdk_vhost.so.8.0 00:16:51.463 SYMLINK libspdk_nvmf.so 00:16:51.463 SYMLINK libspdk_vhost.so 00:16:51.721 LIB libspdk_iscsi.a 00:16:51.721 SO libspdk_iscsi.so.8.0 00:16:51.979 SYMLINK libspdk_iscsi.so 00:16:52.237 CC module/env_dpdk/env_dpdk_rpc.o 00:16:52.237 CC module/vfu_device/vfu_virtio.o 00:16:52.237 CC module/vfu_device/vfu_virtio_blk.o 00:16:52.237 CC module/vfu_device/vfu_virtio_rpc.o 00:16:52.237 CC module/vfu_device/vfu_virtio_scsi.o 00:16:52.237 CC module/vfu_device/vfu_virtio_fs.o 00:16:52.495 CC module/accel/dsa/accel_dsa.o 00:16:52.495 CC module/accel/dsa/accel_dsa_rpc.o 00:16:52.495 CC module/scheduler/gscheduler/gscheduler.o 00:16:52.495 CC module/accel/error/accel_error.o 00:16:52.495 CC module/accel/ioat/accel_ioat_rpc.o 00:16:52.495 CC module/accel/ioat/accel_ioat.o 00:16:52.495 CC module/accel/error/accel_error_rpc.o 00:16:52.495 CC module/blob/bdev/blob_bdev.o 00:16:52.495 CC module/accel/iaa/accel_iaa.o 00:16:52.495 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:52.495 CC module/fsdev/aio/fsdev_aio.o 00:16:52.495 CC module/accel/iaa/accel_iaa_rpc.o 00:16:52.495 CC module/fsdev/aio/linux_aio_mgr.o 00:16:52.495 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:52.495 LIB libspdk_env_dpdk_rpc.a 
00:16:52.495 CC module/sock/posix/posix.o 00:16:52.495 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:52.495 CC module/keyring/linux/keyring.o 00:16:52.495 CC module/keyring/linux/keyring_rpc.o 00:16:52.495 CC module/keyring/file/keyring.o 00:16:52.495 CC module/keyring/file/keyring_rpc.o 00:16:52.495 SO libspdk_env_dpdk_rpc.so.6.0 00:16:52.495 SYMLINK libspdk_env_dpdk_rpc.so 00:16:52.495 LIB libspdk_scheduler_gscheduler.a 00:16:52.495 LIB libspdk_keyring_linux.a 00:16:52.495 LIB libspdk_scheduler_dpdk_governor.a 00:16:52.495 LIB libspdk_accel_ioat.a 00:16:52.753 SO libspdk_scheduler_gscheduler.so.4.0 00:16:52.753 LIB libspdk_keyring_file.a 00:16:52.753 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:52.753 LIB libspdk_scheduler_dynamic.a 00:16:52.753 SO libspdk_keyring_linux.so.1.0 00:16:52.753 LIB libspdk_accel_iaa.a 00:16:52.753 SO libspdk_accel_ioat.so.6.0 00:16:52.753 SO libspdk_keyring_file.so.2.0 00:16:52.753 LIB libspdk_accel_error.a 00:16:52.753 SYMLINK libspdk_scheduler_gscheduler.so 00:16:52.753 SO libspdk_scheduler_dynamic.so.4.0 00:16:52.753 SO libspdk_accel_iaa.so.3.0 00:16:52.753 LIB libspdk_accel_dsa.a 00:16:52.753 SO libspdk_accel_error.so.2.0 00:16:52.753 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:52.753 SYMLINK libspdk_accel_ioat.so 00:16:52.753 SYMLINK libspdk_keyring_linux.so 00:16:52.753 LIB libspdk_blob_bdev.a 00:16:52.753 SYMLINK libspdk_keyring_file.so 00:16:52.753 SO libspdk_accel_dsa.so.5.0 00:16:52.753 SYMLINK libspdk_scheduler_dynamic.so 00:16:52.753 SO libspdk_blob_bdev.so.12.0 00:16:52.753 SYMLINK libspdk_accel_iaa.so 00:16:52.753 SYMLINK libspdk_accel_error.so 00:16:52.753 SYMLINK libspdk_accel_dsa.so 00:16:52.753 SYMLINK libspdk_blob_bdev.so 00:16:52.753 LIB libspdk_vfu_device.a 00:16:52.753 SO libspdk_vfu_device.so.3.0 00:16:53.011 SYMLINK libspdk_vfu_device.so 00:16:53.011 LIB libspdk_fsdev_aio.a 00:16:53.011 LIB libspdk_sock_posix.a 00:16:53.011 SO libspdk_fsdev_aio.so.1.0 00:16:53.011 SO libspdk_sock_posix.so.6.0 
00:16:53.011 SYMLINK libspdk_fsdev_aio.so 00:16:53.270 SYMLINK libspdk_sock_posix.so 00:16:53.270 CC module/bdev/ftl/bdev_ftl.o 00:16:53.270 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:53.270 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:53.270 CC module/bdev/delay/vbdev_delay.o 00:16:53.270 CC module/bdev/gpt/vbdev_gpt.o 00:16:53.270 CC module/bdev/gpt/gpt.o 00:16:53.270 CC module/bdev/error/vbdev_error.o 00:16:53.270 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:53.270 CC module/bdev/nvme/bdev_mdns_client.o 00:16:53.270 CC module/bdev/error/vbdev_error_rpc.o 00:16:53.270 CC module/bdev/nvme/nvme_rpc.o 00:16:53.270 CC module/bdev/nvme/bdev_nvme.o 00:16:53.270 CC module/bdev/nvme/vbdev_opal.o 00:16:53.270 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:53.270 CC module/bdev/iscsi/bdev_iscsi.o 00:16:53.270 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:53.270 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:53.270 CC module/bdev/passthru/vbdev_passthru.o 00:16:53.270 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:53.270 CC module/bdev/null/bdev_null_rpc.o 00:16:53.270 CC module/bdev/null/bdev_null.o 00:16:53.270 CC module/bdev/aio/bdev_aio.o 00:16:53.270 CC module/blobfs/bdev/blobfs_bdev.o 00:16:53.270 CC module/bdev/aio/bdev_aio_rpc.o 00:16:53.270 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:53.270 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:53.270 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:53.270 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:53.270 CC module/bdev/malloc/bdev_malloc.o 00:16:53.270 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:53.270 CC module/bdev/split/vbdev_split.o 00:16:53.270 CC module/bdev/split/vbdev_split_rpc.o 00:16:53.270 CC module/bdev/lvol/vbdev_lvol.o 00:16:53.270 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:53.270 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:53.270 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:53.270 CC module/bdev/raid/bdev_raid.o 00:16:53.270 CC module/bdev/raid/bdev_raid_sb.o 00:16:53.270 CC 
module/bdev/raid/bdev_raid_rpc.o 00:16:53.270 CC module/bdev/raid/raid1.o 00:16:53.270 CC module/bdev/raid/raid0.o 00:16:53.270 CC module/bdev/raid/concat.o 00:16:53.528 LIB libspdk_blobfs_bdev.a 00:16:53.528 LIB libspdk_bdev_gpt.a 00:16:53.528 LIB libspdk_bdev_error.a 00:16:53.528 SO libspdk_blobfs_bdev.so.6.0 00:16:53.528 LIB libspdk_bdev_null.a 00:16:53.528 LIB libspdk_bdev_split.a 00:16:53.528 LIB libspdk_bdev_passthru.a 00:16:53.528 SO libspdk_bdev_gpt.so.6.0 00:16:53.528 LIB libspdk_bdev_ftl.a 00:16:53.528 SO libspdk_bdev_error.so.6.0 00:16:53.528 SO libspdk_bdev_split.so.6.0 00:16:53.528 SO libspdk_bdev_null.so.6.0 00:16:53.528 SO libspdk_bdev_passthru.so.6.0 00:16:53.528 SYMLINK libspdk_blobfs_bdev.so 00:16:53.528 SO libspdk_bdev_ftl.so.6.0 00:16:53.528 SYMLINK libspdk_bdev_gpt.so 00:16:53.528 LIB libspdk_bdev_aio.a 00:16:53.528 LIB libspdk_bdev_iscsi.a 00:16:53.528 LIB libspdk_bdev_delay.a 00:16:53.528 LIB libspdk_bdev_zone_block.a 00:16:53.528 LIB libspdk_bdev_malloc.a 00:16:53.528 SYMLINK libspdk_bdev_split.so 00:16:53.528 SYMLINK libspdk_bdev_error.so 00:16:53.528 SYMLINK libspdk_bdev_passthru.so 00:16:53.528 SYMLINK libspdk_bdev_null.so 00:16:53.528 SO libspdk_bdev_delay.so.6.0 00:16:53.528 SO libspdk_bdev_iscsi.so.6.0 00:16:53.528 SO libspdk_bdev_aio.so.6.0 00:16:53.528 SYMLINK libspdk_bdev_ftl.so 00:16:53.528 SO libspdk_bdev_zone_block.so.6.0 00:16:53.787 SO libspdk_bdev_malloc.so.6.0 00:16:53.787 SYMLINK libspdk_bdev_delay.so 00:16:53.787 SYMLINK libspdk_bdev_aio.so 00:16:53.787 SYMLINK libspdk_bdev_zone_block.so 00:16:53.787 SYMLINK libspdk_bdev_iscsi.so 00:16:53.787 SYMLINK libspdk_bdev_malloc.so 00:16:53.787 LIB libspdk_bdev_virtio.a 00:16:53.787 LIB libspdk_bdev_lvol.a 00:16:53.787 SO libspdk_bdev_virtio.so.6.0 00:16:53.787 SO libspdk_bdev_lvol.so.6.0 00:16:53.787 SYMLINK libspdk_bdev_virtio.so 00:16:53.787 SYMLINK libspdk_bdev_lvol.so 00:16:54.045 LIB libspdk_bdev_raid.a 00:16:54.045 SO libspdk_bdev_raid.so.6.0 00:16:54.302 SYMLINK 
libspdk_bdev_raid.so 00:16:55.236 LIB libspdk_bdev_nvme.a 00:16:55.236 SO libspdk_bdev_nvme.so.7.1 00:16:55.236 SYMLINK libspdk_bdev_nvme.so 00:16:55.802 CC module/event/subsystems/vmd/vmd.o 00:16:55.802 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:55.802 CC module/event/subsystems/keyring/keyring.o 00:16:55.802 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:16:55.802 CC module/event/subsystems/fsdev/fsdev.o 00:16:55.802 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:55.802 CC module/event/subsystems/sock/sock.o 00:16:55.802 CC module/event/subsystems/scheduler/scheduler.o 00:16:55.802 CC module/event/subsystems/iobuf/iobuf.o 00:16:55.802 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:56.060 LIB libspdk_event_vmd.a 00:16:56.060 LIB libspdk_event_keyring.a 00:16:56.060 LIB libspdk_event_vfu_tgt.a 00:16:56.060 LIB libspdk_event_vhost_blk.a 00:16:56.060 LIB libspdk_event_sock.a 00:16:56.060 SO libspdk_event_keyring.so.1.0 00:16:56.060 LIB libspdk_event_fsdev.a 00:16:56.060 SO libspdk_event_vmd.so.6.0 00:16:56.060 LIB libspdk_event_scheduler.a 00:16:56.060 SO libspdk_event_vhost_blk.so.3.0 00:16:56.060 SO libspdk_event_vfu_tgt.so.3.0 00:16:56.060 LIB libspdk_event_iobuf.a 00:16:56.060 SO libspdk_event_sock.so.5.0 00:16:56.060 SO libspdk_event_fsdev.so.1.0 00:16:56.060 SO libspdk_event_scheduler.so.4.0 00:16:56.060 SO libspdk_event_iobuf.so.3.0 00:16:56.060 SYMLINK libspdk_event_keyring.so 00:16:56.060 SYMLINK libspdk_event_vmd.so 00:16:56.060 SYMLINK libspdk_event_vhost_blk.so 00:16:56.060 SYMLINK libspdk_event_vfu_tgt.so 00:16:56.060 SYMLINK libspdk_event_sock.so 00:16:56.060 SYMLINK libspdk_event_fsdev.so 00:16:56.060 SYMLINK libspdk_event_scheduler.so 00:16:56.060 SYMLINK libspdk_event_iobuf.so 00:16:56.627 CC module/event/subsystems/accel/accel.o 00:16:56.627 LIB libspdk_event_accel.a 00:16:56.627 SO libspdk_event_accel.so.6.0 00:16:56.627 SYMLINK libspdk_event_accel.so 00:16:56.886 CC module/event/subsystems/bdev/bdev.o 00:16:57.144 LIB 
libspdk_event_bdev.a 00:16:57.144 SO libspdk_event_bdev.so.6.0 00:16:57.144 SYMLINK libspdk_event_bdev.so 00:16:57.402 CC module/event/subsystems/scsi/scsi.o 00:16:57.402 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:57.402 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:57.402 CC module/event/subsystems/ublk/ublk.o 00:16:57.660 CC module/event/subsystems/nbd/nbd.o 00:16:57.660 LIB libspdk_event_nbd.a 00:16:57.660 LIB libspdk_event_ublk.a 00:16:57.660 LIB libspdk_event_scsi.a 00:16:57.660 SO libspdk_event_nbd.so.6.0 00:16:57.660 SO libspdk_event_ublk.so.3.0 00:16:57.660 SO libspdk_event_scsi.so.6.0 00:16:57.660 LIB libspdk_event_nvmf.a 00:16:57.660 SYMLINK libspdk_event_nbd.so 00:16:57.660 SYMLINK libspdk_event_ublk.so 00:16:57.660 SYMLINK libspdk_event_scsi.so 00:16:57.660 SO libspdk_event_nvmf.so.6.0 00:16:57.918 SYMLINK libspdk_event_nvmf.so 00:16:57.918 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:58.177 CC module/event/subsystems/iscsi/iscsi.o 00:16:58.177 LIB libspdk_event_vhost_scsi.a 00:16:58.177 SO libspdk_event_vhost_scsi.so.3.0 00:16:58.177 LIB libspdk_event_iscsi.a 00:16:58.177 SO libspdk_event_iscsi.so.6.0 00:16:58.177 SYMLINK libspdk_event_vhost_scsi.so 00:16:58.177 SYMLINK libspdk_event_iscsi.so 00:16:58.436 SO libspdk.so.6.0 00:16:58.436 SYMLINK libspdk.so 00:16:58.695 CXX app/trace/trace.o 00:16:58.695 CC app/trace_record/trace_record.o 00:16:58.695 CC app/spdk_top/spdk_top.o 00:16:58.695 CC app/spdk_nvme_discover/discovery_aer.o 00:16:58.695 TEST_HEADER include/spdk/accel.h 00:16:58.695 TEST_HEADER include/spdk/accel_module.h 00:16:58.695 CC app/spdk_nvme_identify/identify.o 00:16:58.695 TEST_HEADER include/spdk/barrier.h 00:16:58.695 CC app/spdk_lspci/spdk_lspci.o 00:16:58.695 CC test/rpc_client/rpc_client_test.o 00:16:58.695 TEST_HEADER include/spdk/base64.h 00:16:58.695 TEST_HEADER include/spdk/assert.h 00:16:58.695 TEST_HEADER include/spdk/bdev_zone.h 00:16:58.695 TEST_HEADER include/spdk/bdev.h 00:16:58.695 TEST_HEADER 
include/spdk/bdev_module.h 00:16:58.695 CC app/spdk_nvme_perf/perf.o 00:16:58.695 TEST_HEADER include/spdk/bit_array.h 00:16:58.695 TEST_HEADER include/spdk/bit_pool.h 00:16:58.695 TEST_HEADER include/spdk/blob_bdev.h 00:16:58.695 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:58.695 TEST_HEADER include/spdk/blobfs.h 00:16:58.695 TEST_HEADER include/spdk/blob.h 00:16:58.695 TEST_HEADER include/spdk/config.h 00:16:58.695 TEST_HEADER include/spdk/crc16.h 00:16:58.695 TEST_HEADER include/spdk/cpuset.h 00:16:58.695 TEST_HEADER include/spdk/conf.h 00:16:58.695 TEST_HEADER include/spdk/dif.h 00:16:58.695 TEST_HEADER include/spdk/crc32.h 00:16:58.695 TEST_HEADER include/spdk/crc64.h 00:16:58.695 TEST_HEADER include/spdk/endian.h 00:16:58.695 TEST_HEADER include/spdk/dma.h 00:16:58.695 TEST_HEADER include/spdk/env.h 00:16:58.695 TEST_HEADER include/spdk/env_dpdk.h 00:16:58.695 TEST_HEADER include/spdk/event.h 00:16:58.695 TEST_HEADER include/spdk/fd_group.h 00:16:58.695 TEST_HEADER include/spdk/file.h 00:16:58.695 TEST_HEADER include/spdk/fsdev.h 00:16:58.695 TEST_HEADER include/spdk/fd.h 00:16:58.695 TEST_HEADER include/spdk/fsdev_module.h 00:16:58.695 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:58.695 TEST_HEADER include/spdk/ftl.h 00:16:58.695 TEST_HEADER include/spdk/gpt_spec.h 00:16:58.695 TEST_HEADER include/spdk/hexlify.h 00:16:58.695 TEST_HEADER include/spdk/histogram_data.h 00:16:58.695 TEST_HEADER include/spdk/idxd.h 00:16:58.695 TEST_HEADER include/spdk/idxd_spec.h 00:16:58.695 TEST_HEADER include/spdk/ioat.h 00:16:58.695 TEST_HEADER include/spdk/ioat_spec.h 00:16:58.695 TEST_HEADER include/spdk/init.h 00:16:58.695 TEST_HEADER include/spdk/iscsi_spec.h 00:16:58.695 TEST_HEADER include/spdk/json.h 00:16:58.695 TEST_HEADER include/spdk/jsonrpc.h 00:16:58.695 TEST_HEADER include/spdk/keyring.h 00:16:58.695 CC app/spdk_dd/spdk_dd.o 00:16:58.695 TEST_HEADER include/spdk/likely.h 00:16:58.695 TEST_HEADER include/spdk/keyring_module.h 00:16:58.695 TEST_HEADER 
include/spdk/log.h 00:16:58.695 TEST_HEADER include/spdk/lvol.h 00:16:58.695 TEST_HEADER include/spdk/md5.h 00:16:58.695 TEST_HEADER include/spdk/memory.h 00:16:58.695 TEST_HEADER include/spdk/mmio.h 00:16:58.695 TEST_HEADER include/spdk/nbd.h 00:16:58.695 TEST_HEADER include/spdk/net.h 00:16:58.695 TEST_HEADER include/spdk/notify.h 00:16:58.695 TEST_HEADER include/spdk/nvme.h 00:16:58.695 TEST_HEADER include/spdk/nvme_intel.h 00:16:58.695 TEST_HEADER include/spdk/nvme_spec.h 00:16:58.695 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:58.695 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:58.695 CC app/iscsi_tgt/iscsi_tgt.o 00:16:58.695 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:58.695 TEST_HEADER include/spdk/nvme_zns.h 00:16:58.695 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:58.695 TEST_HEADER include/spdk/nvmf.h 00:16:58.695 TEST_HEADER include/spdk/nvmf_transport.h 00:16:58.695 TEST_HEADER include/spdk/opal_spec.h 00:16:58.695 TEST_HEADER include/spdk/opal.h 00:16:58.695 TEST_HEADER include/spdk/nvmf_spec.h 00:16:58.695 TEST_HEADER include/spdk/pci_ids.h 00:16:58.695 TEST_HEADER include/spdk/pipe.h 00:16:58.695 TEST_HEADER include/spdk/reduce.h 00:16:58.695 TEST_HEADER include/spdk/queue.h 00:16:58.695 TEST_HEADER include/spdk/rpc.h 00:16:58.695 TEST_HEADER include/spdk/scheduler.h 00:16:58.695 TEST_HEADER include/spdk/scsi.h 00:16:58.695 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:58.695 TEST_HEADER include/spdk/scsi_spec.h 00:16:58.695 TEST_HEADER include/spdk/stdinc.h 00:16:58.695 CC app/nvmf_tgt/nvmf_main.o 00:16:58.695 TEST_HEADER include/spdk/string.h 00:16:58.695 TEST_HEADER include/spdk/sock.h 00:16:58.695 TEST_HEADER include/spdk/thread.h 00:16:58.695 TEST_HEADER include/spdk/trace.h 00:16:58.695 TEST_HEADER include/spdk/trace_parser.h 00:16:58.695 TEST_HEADER include/spdk/tree.h 00:16:58.695 TEST_HEADER include/spdk/ublk.h 00:16:58.695 TEST_HEADER include/spdk/util.h 00:16:58.695 TEST_HEADER include/spdk/uuid.h 00:16:58.695 TEST_HEADER 
include/spdk/version.h 00:16:58.695 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:58.695 TEST_HEADER include/spdk/vhost.h 00:16:58.695 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:58.695 TEST_HEADER include/spdk/vmd.h 00:16:58.695 TEST_HEADER include/spdk/xor.h 00:16:58.695 CC app/spdk_tgt/spdk_tgt.o 00:16:58.695 TEST_HEADER include/spdk/zipf.h 00:16:58.695 CXX test/cpp_headers/accel.o 00:16:58.695 CXX test/cpp_headers/accel_module.o 00:16:58.695 CXX test/cpp_headers/assert.o 00:16:58.695 CXX test/cpp_headers/barrier.o 00:16:58.695 CXX test/cpp_headers/base64.o 00:16:58.695 CXX test/cpp_headers/bdev.o 00:16:58.695 CXX test/cpp_headers/bdev_zone.o 00:16:58.695 CXX test/cpp_headers/bdev_module.o 00:16:58.962 CXX test/cpp_headers/bit_array.o 00:16:58.962 CXX test/cpp_headers/bit_pool.o 00:16:58.962 CXX test/cpp_headers/blob_bdev.o 00:16:58.962 CXX test/cpp_headers/blobfs.o 00:16:58.962 CXX test/cpp_headers/blob.o 00:16:58.962 CXX test/cpp_headers/conf.o 00:16:58.962 CXX test/cpp_headers/config.o 00:16:58.962 CXX test/cpp_headers/blobfs_bdev.o 00:16:58.962 CXX test/cpp_headers/cpuset.o 00:16:58.962 CXX test/cpp_headers/crc32.o 00:16:58.962 CXX test/cpp_headers/dif.o 00:16:58.962 CXX test/cpp_headers/crc16.o 00:16:58.962 CXX test/cpp_headers/crc64.o 00:16:58.962 CXX test/cpp_headers/env_dpdk.o 00:16:58.962 CXX test/cpp_headers/endian.o 00:16:58.962 CXX test/cpp_headers/dma.o 00:16:58.962 CXX test/cpp_headers/env.o 00:16:58.962 CXX test/cpp_headers/fd_group.o 00:16:58.962 CXX test/cpp_headers/event.o 00:16:58.962 CXX test/cpp_headers/file.o 00:16:58.962 CXX test/cpp_headers/fd.o 00:16:58.962 CXX test/cpp_headers/fsdev.o 00:16:58.962 CXX test/cpp_headers/fsdev_module.o 00:16:58.962 CXX test/cpp_headers/ftl.o 00:16:58.962 CXX test/cpp_headers/gpt_spec.o 00:16:58.962 CXX test/cpp_headers/fuse_dispatcher.o 00:16:58.962 CXX test/cpp_headers/histogram_data.o 00:16:58.962 CXX test/cpp_headers/hexlify.o 00:16:58.962 CXX test/cpp_headers/idxd.o 00:16:58.962 CXX 
test/cpp_headers/idxd_spec.o 00:16:58.962 CXX test/cpp_headers/init.o 00:16:58.962 CXX test/cpp_headers/ioat_spec.o 00:16:58.962 CXX test/cpp_headers/ioat.o 00:16:58.962 CXX test/cpp_headers/iscsi_spec.o 00:16:58.962 CXX test/cpp_headers/keyring.o 00:16:58.962 CXX test/cpp_headers/json.o 00:16:58.962 CXX test/cpp_headers/jsonrpc.o 00:16:58.962 CXX test/cpp_headers/keyring_module.o 00:16:58.962 CXX test/cpp_headers/likely.o 00:16:58.962 CXX test/cpp_headers/log.o 00:16:58.962 CXX test/cpp_headers/lvol.o 00:16:58.962 CXX test/cpp_headers/md5.o 00:16:58.962 CXX test/cpp_headers/memory.o 00:16:58.962 CXX test/cpp_headers/mmio.o 00:16:58.962 CXX test/cpp_headers/nbd.o 00:16:58.962 CXX test/cpp_headers/net.o 00:16:58.962 CXX test/cpp_headers/notify.o 00:16:58.962 CXX test/cpp_headers/nvme.o 00:16:58.962 CXX test/cpp_headers/nvme_intel.o 00:16:58.962 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:58.962 CXX test/cpp_headers/nvme_ocssd.o 00:16:58.962 CXX test/cpp_headers/nvme_spec.o 00:16:58.962 CXX test/cpp_headers/nvme_zns.o 00:16:58.962 CXX test/cpp_headers/nvmf_cmd.o 00:16:58.962 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:58.962 CXX test/cpp_headers/nvmf.o 00:16:58.962 CXX test/cpp_headers/nvmf_spec.o 00:16:58.962 CXX test/cpp_headers/nvmf_transport.o 00:16:58.962 CXX test/cpp_headers/opal.o 00:16:58.962 CC test/env/vtophys/vtophys.o 00:16:58.962 CC test/app/histogram_perf/histogram_perf.o 00:16:58.962 CC app/fio/nvme/fio_plugin.o 00:16:58.962 CC test/app/jsoncat/jsoncat.o 00:16:58.962 CXX test/cpp_headers/opal_spec.o 00:16:58.962 CC test/env/memory/memory_ut.o 00:16:58.962 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:58.962 CC examples/ioat/perf/perf.o 00:16:58.962 CC test/app/stub/stub.o 00:16:58.962 CC test/thread/poller_perf/poller_perf.o 00:16:58.962 CC test/env/pci/pci_ut.o 00:16:58.962 CC examples/ioat/verify/verify.o 00:16:58.962 CC examples/util/zipf/zipf.o 00:16:58.962 CC app/fio/bdev/fio_plugin.o 00:16:58.962 CC test/dma/test_dma/test_dma.o 
00:16:58.962 CC test/app/bdev_svc/bdev_svc.o 00:16:58.962 LINK spdk_lspci 00:16:59.230 LINK rpc_client_test 00:16:59.230 LINK spdk_nvme_discover 00:16:59.230 CC test/env/mem_callbacks/mem_callbacks.o 00:16:59.491 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:59.491 LINK spdk_trace_record 00:16:59.491 LINK histogram_perf 00:16:59.491 CXX test/cpp_headers/pci_ids.o 00:16:59.491 LINK nvmf_tgt 00:16:59.491 CXX test/cpp_headers/pipe.o 00:16:59.491 LINK env_dpdk_post_init 00:16:59.491 LINK interrupt_tgt 00:16:59.491 LINK spdk_tgt 00:16:59.491 CXX test/cpp_headers/queue.o 00:16:59.491 CXX test/cpp_headers/reduce.o 00:16:59.491 CXX test/cpp_headers/rpc.o 00:16:59.491 CXX test/cpp_headers/scsi.o 00:16:59.491 CXX test/cpp_headers/scheduler.o 00:16:59.491 CXX test/cpp_headers/scsi_spec.o 00:16:59.491 CXX test/cpp_headers/sock.o 00:16:59.491 LINK iscsi_tgt 00:16:59.491 CXX test/cpp_headers/stdinc.o 00:16:59.491 CXX test/cpp_headers/string.o 00:16:59.491 CXX test/cpp_headers/thread.o 00:16:59.491 LINK vtophys 00:16:59.491 CXX test/cpp_headers/trace.o 00:16:59.491 LINK jsoncat 00:16:59.491 CXX test/cpp_headers/trace_parser.o 00:16:59.491 CXX test/cpp_headers/tree.o 00:16:59.491 CXX test/cpp_headers/ublk.o 00:16:59.491 LINK stub 00:16:59.491 CXX test/cpp_headers/util.o 00:16:59.491 CXX test/cpp_headers/uuid.o 00:16:59.491 CXX test/cpp_headers/version.o 00:16:59.491 CXX test/cpp_headers/vfio_user_pci.o 00:16:59.491 CXX test/cpp_headers/vfio_user_spec.o 00:16:59.491 CXX test/cpp_headers/vhost.o 00:16:59.491 CXX test/cpp_headers/vmd.o 00:16:59.491 CXX test/cpp_headers/xor.o 00:16:59.491 CXX test/cpp_headers/zipf.o 00:16:59.492 LINK ioat_perf 00:16:59.492 LINK zipf 00:16:59.492 LINK bdev_svc 00:16:59.492 LINK poller_perf 00:16:59.750 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:59.750 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:59.750 LINK spdk_trace 00:16:59.750 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:59.750 LINK verify 00:16:59.750 LINK spdk_dd 00:16:59.750 LINK 
pci_ut 00:16:59.750 LINK spdk_nvme 00:17:00.009 LINK spdk_bdev 00:17:00.009 LINK nvme_fuzz 00:17:00.009 LINK test_dma 00:17:00.009 CC app/vhost/vhost.o 00:17:00.009 CC test/event/event_perf/event_perf.o 00:17:00.009 CC examples/vmd/lsvmd/lsvmd.o 00:17:00.009 CC test/event/reactor/reactor.o 00:17:00.009 CC examples/sock/hello_world/hello_sock.o 00:17:00.009 CC test/event/reactor_perf/reactor_perf.o 00:17:00.009 CC examples/vmd/led/led.o 00:17:00.009 CC examples/idxd/perf/perf.o 00:17:00.009 LINK spdk_nvme_perf 00:17:00.009 CC test/event/app_repeat/app_repeat.o 00:17:00.009 LINK vhost_fuzz 00:17:00.009 CC examples/thread/thread/thread_ex.o 00:17:00.009 LINK spdk_nvme_identify 00:17:00.009 CC test/event/scheduler/scheduler.o 00:17:00.009 LINK mem_callbacks 00:17:00.009 LINK spdk_top 00:17:00.009 LINK reactor 00:17:00.266 LINK lsvmd 00:17:00.266 LINK event_perf 00:17:00.266 LINK reactor_perf 00:17:00.266 LINK led 00:17:00.266 LINK vhost 00:17:00.266 LINK app_repeat 00:17:00.266 LINK hello_sock 00:17:00.266 LINK scheduler 00:17:00.266 LINK thread 00:17:00.266 LINK idxd_perf 00:17:00.266 LINK memory_ut 00:17:00.524 CC test/nvme/overhead/overhead.o 00:17:00.524 CC test/nvme/simple_copy/simple_copy.o 00:17:00.524 CC test/nvme/reset/reset.o 00:17:00.524 CC test/nvme/startup/startup.o 00:17:00.524 CC test/nvme/reserve/reserve.o 00:17:00.524 CC test/nvme/err_injection/err_injection.o 00:17:00.524 CC test/nvme/e2edp/nvme_dp.o 00:17:00.524 CC test/nvme/connect_stress/connect_stress.o 00:17:00.524 CC test/nvme/fused_ordering/fused_ordering.o 00:17:00.524 CC test/nvme/compliance/nvme_compliance.o 00:17:00.524 CC test/nvme/aer/aer.o 00:17:00.524 CC test/nvme/sgl/sgl.o 00:17:00.524 CC test/nvme/fdp/fdp.o 00:17:00.524 CC test/nvme/boot_partition/boot_partition.o 00:17:00.524 CC test/nvme/doorbell_aers/doorbell_aers.o 00:17:00.524 CC test/nvme/cuse/cuse.o 00:17:00.524 CC test/accel/dif/dif.o 00:17:00.524 CC test/blobfs/mkfs/mkfs.o 00:17:00.524 CC test/lvol/esnap/esnap.o 00:17:00.524 
LINK err_injection 00:17:00.524 LINK boot_partition 00:17:00.524 LINK startup 00:17:00.524 LINK connect_stress 00:17:00.524 LINK reserve 00:17:00.524 LINK doorbell_aers 00:17:00.524 LINK simple_copy 00:17:00.524 LINK fused_ordering 00:17:00.782 LINK nvme_dp 00:17:00.782 LINK overhead 00:17:00.782 LINK reset 00:17:00.782 LINK sgl 00:17:00.782 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:00.782 CC examples/nvme/hotplug/hotplug.o 00:17:00.782 CC examples/nvme/abort/abort.o 00:17:00.782 CC examples/nvme/arbitration/arbitration.o 00:17:00.782 CC examples/nvme/reconnect/reconnect.o 00:17:00.782 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:00.782 LINK aer 00:17:00.782 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:00.782 LINK mkfs 00:17:00.782 CC examples/nvme/hello_world/hello_world.o 00:17:00.782 LINK nvme_compliance 00:17:00.782 LINK fdp 00:17:00.782 CC examples/accel/perf/accel_perf.o 00:17:00.782 CC examples/fsdev/hello_world/hello_fsdev.o 00:17:00.782 CC examples/blob/hello_world/hello_blob.o 00:17:00.782 CC examples/blob/cli/blobcli.o 00:17:00.782 LINK pmr_persistence 00:17:00.782 LINK cmb_copy 00:17:01.041 LINK hello_world 00:17:01.041 LINK hotplug 00:17:01.041 LINK reconnect 00:17:01.041 LINK arbitration 00:17:01.041 LINK abort 00:17:01.041 LINK iscsi_fuzz 00:17:01.041 LINK dif 00:17:01.041 LINK hello_blob 00:17:01.041 LINK hello_fsdev 00:17:01.041 LINK nvme_manage 00:17:01.300 LINK accel_perf 00:17:01.300 LINK blobcli 00:17:01.558 LINK cuse 00:17:01.558 CC test/bdev/bdevio/bdevio.o 00:17:01.558 CC examples/bdev/hello_world/hello_bdev.o 00:17:01.558 CC examples/bdev/bdevperf/bdevperf.o 00:17:01.816 LINK bdevio 00:17:01.816 LINK hello_bdev 00:17:02.435 LINK bdevperf 00:17:02.693 CC examples/nvmf/nvmf/nvmf.o 00:17:02.951 LINK nvmf 00:17:04.328 LINK esnap 00:17:04.328 00:17:04.328 real 0m55.045s 00:17:04.328 user 8m0.028s 00:17:04.328 sys 3m31.266s 00:17:04.328 10:28:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:04.328 10:28:05 make 
-- common/autotest_common.sh@10 -- $ set +x 00:17:04.328 ************************************ 00:17:04.328 END TEST make 00:17:04.328 ************************************ 00:17:04.587 10:28:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:04.587 10:28:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:04.587 10:28:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:04.587 10:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.587 10:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:17:04.587 10:28:05 -- pm/common@44 -- $ pid=386856 00:17:04.587 10:28:05 -- pm/common@50 -- $ kill -TERM 386856 00:17:04.587 10:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.587 10:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:17:04.587 10:28:05 -- pm/common@44 -- $ pid=386858 00:17:04.587 10:28:05 -- pm/common@50 -- $ kill -TERM 386858 00:17:04.587 10:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.587 10:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:17:04.587 10:28:05 -- pm/common@44 -- $ pid=386860 00:17:04.587 10:28:05 -- pm/common@50 -- $ kill -TERM 386860 00:17:04.587 10:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.587 10:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:17:04.587 10:28:05 -- pm/common@44 -- $ pid=386883 00:17:04.587 10:28:05 -- pm/common@50 -- $ sudo -E kill -TERM 386883 00:17:04.587 10:28:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:17:04.587 10:28:05 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:17:04.587 10:28:05 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:04.587 10:28:05 -- common/autotest_common.sh@1711 -- # lcov --version 00:17:04.587 10:28:05 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:04.587 10:28:05 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:04.587 10:28:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.587 10:28:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.587 10:28:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.587 10:28:05 -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.587 10:28:05 -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.587 10:28:05 -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.587 10:28:05 -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.587 10:28:05 -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.587 10:28:05 -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.587 10:28:05 -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.587 10:28:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.587 10:28:05 -- scripts/common.sh@344 -- # case "$op" in 00:17:04.587 10:28:05 -- scripts/common.sh@345 -- # : 1 00:17:04.587 10:28:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.587 10:28:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.587 10:28:05 -- scripts/common.sh@365 -- # decimal 1 00:17:04.587 10:28:05 -- scripts/common.sh@353 -- # local d=1 00:17:04.587 10:28:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.587 10:28:05 -- scripts/common.sh@355 -- # echo 1 00:17:04.587 10:28:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.587 10:28:05 -- scripts/common.sh@366 -- # decimal 2 00:17:04.587 10:28:05 -- scripts/common.sh@353 -- # local d=2 00:17:04.587 10:28:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.587 10:28:05 -- scripts/common.sh@355 -- # echo 2 00:17:04.587 10:28:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.587 10:28:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.587 10:28:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.587 10:28:05 -- scripts/common.sh@368 -- # return 0 00:17:04.587 10:28:05 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.587 10:28:05 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:04.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.587 --rc genhtml_branch_coverage=1 00:17:04.587 --rc genhtml_function_coverage=1 00:17:04.587 --rc genhtml_legend=1 00:17:04.587 --rc geninfo_all_blocks=1 00:17:04.587 --rc geninfo_unexecuted_blocks=1 00:17:04.587 00:17:04.587 ' 00:17:04.587 10:28:05 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:04.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.587 --rc genhtml_branch_coverage=1 00:17:04.587 --rc genhtml_function_coverage=1 00:17:04.587 --rc genhtml_legend=1 00:17:04.587 --rc geninfo_all_blocks=1 00:17:04.587 --rc geninfo_unexecuted_blocks=1 00:17:04.587 00:17:04.587 ' 00:17:04.587 10:28:05 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:04.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.587 --rc genhtml_branch_coverage=1 00:17:04.587 --rc 
genhtml_function_coverage=1 00:17:04.587 --rc genhtml_legend=1 00:17:04.587 --rc geninfo_all_blocks=1 00:17:04.587 --rc geninfo_unexecuted_blocks=1 00:17:04.587 00:17:04.587 ' 00:17:04.587 10:28:05 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:04.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.587 --rc genhtml_branch_coverage=1 00:17:04.587 --rc genhtml_function_coverage=1 00:17:04.587 --rc genhtml_legend=1 00:17:04.587 --rc geninfo_all_blocks=1 00:17:04.587 --rc geninfo_unexecuted_blocks=1 00:17:04.587 00:17:04.587 ' 00:17:04.587 10:28:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.587 10:28:05 -- nvmf/common.sh@7 -- # uname -s 00:17:04.587 10:28:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.587 10:28:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.587 10:28:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.587 10:28:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.587 10:28:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.587 10:28:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.587 10:28:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.587 10:28:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.587 10:28:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.587 10:28:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.587 10:28:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.587 10:28:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.587 10:28:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.587 10:28:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.587 10:28:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.587 10:28:05 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.587 10:28:05 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.587 10:28:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.587 10:28:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.587 10:28:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.587 10:28:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.587 10:28:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.587 10:28:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.588 10:28:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.588 10:28:05 -- paths/export.sh@5 -- # export PATH 00:17:04.588 10:28:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.588 10:28:05 -- nvmf/common.sh@51 -- # : 0 00:17:04.588 10:28:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.588 10:28:05 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:17:04.588 10:28:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.588 10:28:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.588 10:28:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.588 10:28:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.588 10:28:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.588 10:28:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.588 10:28:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.588 10:28:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:04.588 10:28:05 -- spdk/autotest.sh@32 -- # uname -s 00:17:04.588 10:28:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:04.588 10:28:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:04.588 10:28:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:17:04.847 10:28:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:17:04.847 10:28:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:17:04.847 10:28:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:04.847 10:28:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:04.847 10:28:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:04.847 10:28:05 -- spdk/autotest.sh@48 -- # udevadm_pid=449577 00:17:04.847 10:28:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:04.847 10:28:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:17:04.847 10:28:05 -- pm/common@17 -- # local monitor 00:17:04.847 10:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.847 10:28:05 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:17:04.847 10:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.847 10:28:05 -- pm/common@21 -- # date +%s 00:17:04.847 10:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:04.847 10:28:05 -- pm/common@21 -- # date +%s 00:17:04.847 10:28:05 -- pm/common@25 -- # sleep 1 00:17:04.847 10:28:05 -- pm/common@21 -- # date +%s 00:17:04.847 10:28:05 -- pm/common@21 -- # date +%s 00:17:04.847 10:28:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733736485 00:17:04.847 10:28:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733736485 00:17:04.847 10:28:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733736485 00:17:04.847 10:28:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733736485 00:17:04.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733736485_collect-vmstat.pm.log 00:17:04.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733736485_collect-cpu-load.pm.log 00:17:04.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733736485_collect-cpu-temp.pm.log 00:17:04.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733736485_collect-bmc-pm.bmc.pm.log 00:17:05.819 
10:28:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:05.819 10:28:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:05.819 10:28:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.819 10:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:05.819 10:28:06 -- spdk/autotest.sh@59 -- # create_test_list 00:17:05.819 10:28:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:17:05.819 10:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:05.819 10:28:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:17:05.819 10:28:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:05.819 10:28:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:05.819 10:28:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:17:05.819 10:28:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:05.819 10:28:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:05.819 10:28:06 -- common/autotest_common.sh@1457 -- # uname 00:17:05.819 10:28:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:17:05.819 10:28:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:05.819 10:28:06 -- common/autotest_common.sh@1477 -- # uname 00:17:05.819 10:28:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:17:05.819 10:28:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:17:05.819 10:28:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:17:05.819 lcov: LCOV version 1.15 00:17:05.819 10:28:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:17:18.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:17:18.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:17:33.118 10:28:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:17:33.118 10:28:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.118 10:28:32 -- common/autotest_common.sh@10 -- # set +x 00:17:33.118 10:28:32 -- spdk/autotest.sh@78 -- # rm -f 00:17:33.118 10:28:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:17:33.687 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:17:33.687 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:17:33.687 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:17:33.687 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:17:33.687 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:17:33.947 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:17:33.947 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:17:34.206 10:28:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:17:34.206 10:28:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:34.206 10:28:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:34.206 10:28:35 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:17:34.206 10:28:35 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:17:34.206 10:28:35 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:17:34.206 10:28:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:34.206 10:28:35 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:17:34.206 10:28:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.206 10:28:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:17:34.206 10:28:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:34.206 10:28:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:34.206 10:28:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.206 10:28:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:17:34.206 10:28:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:34.206 10:28:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:34.206 10:28:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:17:34.206 10:28:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:34.206 10:28:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:34.206 No valid GPT data, bailing 00:17:34.206 10:28:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:34.206 10:28:35 -- scripts/common.sh@394 -- # pt= 00:17:34.206 10:28:35 -- scripts/common.sh@395 -- 
# return 1 00:17:34.206 10:28:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:34.206 1+0 records in 00:17:34.206 1+0 records out 00:17:34.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00141536 s, 741 MB/s 00:17:34.206 10:28:35 -- spdk/autotest.sh@105 -- # sync 00:17:34.206 10:28:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:17:34.206 10:28:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:34.206 10:28:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:39.483 10:28:40 -- spdk/autotest.sh@111 -- # uname -s 00:17:39.483 10:28:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:17:39.483 10:28:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:39.483 10:28:40 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:17:42.772 Hugepages 00:17:42.772 node hugesize free / total 00:17:42.772 node0 1048576kB 0 / 0 00:17:42.772 node0 2048kB 0 / 0 00:17:42.772 node1 1048576kB 0 / 0 00:17:42.772 node1 2048kB 0 / 0 00:17:42.772 00:17:42.772 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:42.772 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:17:42.772 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:17:42.772 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:17:42.772 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:17:42.772 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:17:42.772 10:28:43 -- spdk/autotest.sh@117 -- # uname -s 00:17:42.772 10:28:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:42.772 10:28:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:42.772 10:28:43 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:17:45.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:17:45.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:17:45.871 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:17:46.130 10:28:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:17:47.065 10:28:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:17:47.065 10:28:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:17:47.065 10:28:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:17:47.065 10:28:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:17:47.065 10:28:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:47.065 10:28:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:47.065 10:28:48 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:47.065 10:28:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:47.065 10:28:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:47.065 10:28:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:47.065 10:28:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:17:47.065 10:28:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:17:49.600 Waiting for block devices as requested 00:17:49.600 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:17:49.600 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:17:49.860 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:17:49.860 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:17:49.860 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:17:49.860 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:17:50.120 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:17:50.120 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:17:50.120 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:17:50.120 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:17:50.378 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:17:50.378 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:17:50.378 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:17:50.637 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:17:50.637 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:17:50.637 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:17:50.896 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:17:50.896 10:28:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:50.896 10:28:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:17:50.896 10:28:51 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:17:50.896 10:28:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:17:50.896 10:28:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:17:50.896 10:28:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:50.896 10:28:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:50.896 10:28:51 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:17:50.896 10:28:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:50.896 10:28:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:50.896 10:28:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:17:50.896 10:28:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:50.896 10:28:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:50.896 10:28:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:50.896 10:28:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:50.896 10:28:51 -- common/autotest_common.sh@1543 -- # continue 00:17:50.896 10:28:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:50.896 10:28:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.896 10:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:50.896 10:28:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:50.896 10:28:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.896 
10:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:50.896 10:28:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:17:53.431 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:17:53.431 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:17:53.694 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:17:54.627 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:17:54.627 10:28:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:54.627 10:28:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.627 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.627 10:28:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:54.627 10:28:55 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:17:54.627 10:28:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:17:54.627 10:28:55 -- common/autotest_common.sh@1563 -- # bdfs=() 00:17:54.627 10:28:55 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:17:54.627 10:28:55 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:17:54.627 10:28:55 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:17:54.627 10:28:55 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:17:54.627 10:28:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:54.627 10:28:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:54.627 10:28:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:54.627 10:28:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:54.627 10:28:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:54.627 10:28:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:54.627 10:28:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:17:54.627 10:28:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:54.627 10:28:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:17:54.627 10:28:55 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:17:54.627 10:28:55 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:17:54.627 10:28:55 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:17:54.627 10:28:55 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:17:54.627 10:28:55 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:17:54.627 10:28:55 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:17:54.627 10:28:55 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=463995 00:17:54.627 10:28:55 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:54.627 10:28:55 -- common/autotest_common.sh@1585 -- # waitforlisten 463995 00:17:54.627 10:28:55 -- common/autotest_common.sh@835 -- # '[' -z 463995 ']' 00:17:54.627 10:28:55 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.627 10:28:55 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.627 10:28:55 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.627 10:28:55 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.627 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.885 [2024-12-09 10:28:55.812595] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:17:54.885 [2024-12-09 10:28:55.812646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463995 ] 00:17:54.885 [2024-12-09 10:28:55.878718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.885 [2024-12-09 10:28:55.921186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.143 10:28:56 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.143 10:28:56 -- common/autotest_common.sh@868 -- # return 0 00:17:55.143 10:28:56 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:17:55.143 10:28:56 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:17:55.143 10:28:56 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:17:58.430 nvme0n1 00:17:58.430 10:28:59 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:17:58.430 [2024-12-09 10:28:59.301807] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:17:58.430 request: 00:17:58.430 { 00:17:58.430 "nvme_ctrlr_name": "nvme0", 00:17:58.430 "password": "test", 00:17:58.430 "method": "bdev_nvme_opal_revert", 00:17:58.430 "req_id": 1 00:17:58.430 } 00:17:58.430 Got JSON-RPC error response 00:17:58.430 response: 00:17:58.430 { 00:17:58.430 
"code": -32602, 00:17:58.430 "message": "Invalid parameters" 00:17:58.430 } 00:17:58.430 10:28:59 -- common/autotest_common.sh@1591 -- # true 00:17:58.430 10:28:59 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:17:58.430 10:28:59 -- common/autotest_common.sh@1595 -- # killprocess 463995 00:17:58.430 10:28:59 -- common/autotest_common.sh@954 -- # '[' -z 463995 ']' 00:17:58.430 10:28:59 -- common/autotest_common.sh@958 -- # kill -0 463995 00:17:58.430 10:28:59 -- common/autotest_common.sh@959 -- # uname 00:17:58.430 10:28:59 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.430 10:28:59 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463995 00:17:58.430 10:28:59 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.430 10:28:59 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.430 10:28:59 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463995' 00:17:58.430 killing process with pid 463995 00:17:58.431 10:28:59 -- common/autotest_common.sh@973 -- # kill 463995 00:17:58.431 10:28:59 -- common/autotest_common.sh@978 -- # wait 463995 00:18:00.336 10:29:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:18:00.336 10:29:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:18:00.336 10:29:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:00.336 10:29:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:00.336 10:29:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:18:00.336 10:29:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.336 10:29:01 -- common/autotest_common.sh@10 -- # set +x 00:18:00.336 10:29:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:18:00.336 10:29:01 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:18:00.336 10:29:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.336 10:29:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.336 10:29:01 -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.336 ************************************ 00:18:00.336 START TEST env 00:18:00.336 ************************************ 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:18:00.336 * Looking for test storage... 00:18:00.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1711 -- # lcov --version 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:00.336 10:29:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.336 10:29:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.336 10:29:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.336 10:29:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.336 10:29:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.336 10:29:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.336 10:29:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.336 10:29:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.336 10:29:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.336 10:29:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.336 10:29:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.336 10:29:01 env -- scripts/common.sh@344 -- # case "$op" in 00:18:00.336 10:29:01 env -- scripts/common.sh@345 -- # : 1 00:18:00.336 10:29:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.336 10:29:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.336 10:29:01 env -- scripts/common.sh@365 -- # decimal 1 00:18:00.336 10:29:01 env -- scripts/common.sh@353 -- # local d=1 00:18:00.336 10:29:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.336 10:29:01 env -- scripts/common.sh@355 -- # echo 1 00:18:00.336 10:29:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.336 10:29:01 env -- scripts/common.sh@366 -- # decimal 2 00:18:00.336 10:29:01 env -- scripts/common.sh@353 -- # local d=2 00:18:00.336 10:29:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.336 10:29:01 env -- scripts/common.sh@355 -- # echo 2 00:18:00.336 10:29:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.336 10:29:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.336 10:29:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.336 10:29:01 env -- scripts/common.sh@368 -- # return 0 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.336 --rc genhtml_branch_coverage=1 00:18:00.336 --rc genhtml_function_coverage=1 00:18:00.336 --rc genhtml_legend=1 00:18:00.336 --rc geninfo_all_blocks=1 00:18:00.336 --rc geninfo_unexecuted_blocks=1 00:18:00.336 00:18:00.336 ' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.336 --rc genhtml_branch_coverage=1 00:18:00.336 --rc genhtml_function_coverage=1 00:18:00.336 --rc genhtml_legend=1 00:18:00.336 --rc geninfo_all_blocks=1 00:18:00.336 --rc geninfo_unexecuted_blocks=1 00:18:00.336 00:18:00.336 ' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:18:00.336 --rc genhtml_branch_coverage=1 00:18:00.336 --rc genhtml_function_coverage=1 00:18:00.336 --rc genhtml_legend=1 00:18:00.336 --rc geninfo_all_blocks=1 00:18:00.336 --rc geninfo_unexecuted_blocks=1 00:18:00.336 00:18:00.336 ' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.336 --rc genhtml_branch_coverage=1 00:18:00.336 --rc genhtml_function_coverage=1 00:18:00.336 --rc genhtml_legend=1 00:18:00.336 --rc geninfo_all_blocks=1 00:18:00.336 --rc geninfo_unexecuted_blocks=1 00:18:00.336 00:18:00.336 ' 00:18:00.336 10:29:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.336 10:29:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.336 10:29:01 env -- common/autotest_common.sh@10 -- # set +x 00:18:00.336 ************************************ 00:18:00.336 START TEST env_memory 00:18:00.336 ************************************ 00:18:00.336 10:29:01 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:18:00.336 00:18:00.336 00:18:00.336 CUnit - A unit testing framework for C - Version 2.1-3 00:18:00.336 http://cunit.sourceforge.net/ 00:18:00.336 00:18:00.336 00:18:00.336 Suite: mem_map_2mb 00:18:00.336 Test: alloc and free memory map ...[2024-12-09 10:29:01.322982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:18:00.336 passed 00:18:00.336 Test: mem map translation ...[2024-12-09 10:29:01.342427] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:18:00.336 [2024-12-09 
10:29:01.342443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:18:00.336 [2024-12-09 10:29:01.342489] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:18:00.336 [2024-12-09 10:29:01.342496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:18:00.336 passed 00:18:00.336 Test: mem map registration ...[2024-12-09 10:29:01.384179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:18:00.337 [2024-12-09 10:29:01.384206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:18:00.337 passed 00:18:00.337 Test: mem map adjacent registrations ...passed 00:18:00.337 Suite: mem_map_4kb 00:18:00.337 Test: alloc and free memory map ...[2024-12-09 10:29:01.491604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:18:00.337 passed 00:18:00.596 Test: mem map translation ...[2024-12-09 10:29:01.516347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:18:00.596 [2024-12-09 10:29:01.516365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:18:00.596 [2024-12-09 10:29:01.534808] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:18:00.596 [2024-12-09 10:29:01.534820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:18:00.596 passed 00:18:00.596 Test: mem map registration ...[2024-12-09 10:29:01.608869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:18:00.596 [2024-12-09 10:29:01.608891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:18:00.596 passed 00:18:00.596 Test: mem map adjacent registrations ...passed 00:18:00.596 00:18:00.596 Run Summary: Type Total Ran Passed Failed Inactive 00:18:00.596 suites 2 2 n/a 0 0 00:18:00.596 tests 8 8 8 0 0 00:18:00.596 asserts 304 304 304 0 n/a 00:18:00.596 00:18:00.596 Elapsed time = 0.407 seconds 00:18:00.596 00:18:00.596 real 0m0.421s 00:18:00.596 user 0m0.402s 00:18:00.596 sys 0m0.018s 00:18:00.596 10:29:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.596 10:29:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:18:00.596 ************************************ 00:18:00.596 END TEST env_memory 00:18:00.596 ************************************ 00:18:00.596 10:29:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:18:00.596 10:29:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.596 10:29:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.596 10:29:01 env -- common/autotest_common.sh@10 -- # set +x 00:18:00.596 ************************************ 00:18:00.596 START TEST env_vtophys 00:18:00.596 
************************************ 00:18:00.596 10:29:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:18:00.855 EAL: lib.eal log level changed from notice to debug 00:18:00.855 EAL: Detected lcore 0 as core 0 on socket 0 00:18:00.855 EAL: Detected lcore 1 as core 1 on socket 0 00:18:00.855 EAL: Detected lcore 2 as core 2 on socket 0 00:18:00.855 EAL: Detected lcore 3 as core 3 on socket 0 00:18:00.855 EAL: Detected lcore 4 as core 4 on socket 0 00:18:00.855 EAL: Detected lcore 5 as core 5 on socket 0 00:18:00.855 EAL: Detected lcore 6 as core 6 on socket 0 00:18:00.855 EAL: Detected lcore 7 as core 8 on socket 0 00:18:00.855 EAL: Detected lcore 8 as core 9 on socket 0 00:18:00.855 EAL: Detected lcore 9 as core 10 on socket 0 00:18:00.855 EAL: Detected lcore 10 as core 11 on socket 0 00:18:00.855 EAL: Detected lcore 11 as core 12 on socket 0 00:18:00.855 EAL: Detected lcore 12 as core 13 on socket 0 00:18:00.855 EAL: Detected lcore 13 as core 16 on socket 0 00:18:00.855 EAL: Detected lcore 14 as core 17 on socket 0 00:18:00.855 EAL: Detected lcore 15 as core 18 on socket 0 00:18:00.855 EAL: Detected lcore 16 as core 19 on socket 0 00:18:00.855 EAL: Detected lcore 17 as core 20 on socket 0 00:18:00.855 EAL: Detected lcore 18 as core 21 on socket 0 00:18:00.855 EAL: Detected lcore 19 as core 25 on socket 0 00:18:00.855 EAL: Detected lcore 20 as core 26 on socket 0 00:18:00.855 EAL: Detected lcore 21 as core 27 on socket 0 00:18:00.855 EAL: Detected lcore 22 as core 28 on socket 0 00:18:00.855 EAL: Detected lcore 23 as core 29 on socket 0 00:18:00.855 EAL: Detected lcore 24 as core 0 on socket 1 00:18:00.855 EAL: Detected lcore 25 as core 1 on socket 1 00:18:00.855 EAL: Detected lcore 26 as core 2 on socket 1 00:18:00.855 EAL: Detected lcore 27 as core 3 on socket 1 00:18:00.855 EAL: Detected lcore 28 as core 4 on socket 1 00:18:00.855 EAL: Detected lcore 29 as core 5 on socket 1 
00:18:00.855 EAL: Detected lcore 30 as core 6 on socket 1 00:18:00.855 EAL: Detected lcore 31 as core 9 on socket 1 00:18:00.855 EAL: Detected lcore 32 as core 10 on socket 1 00:18:00.855 EAL: Detected lcore 33 as core 11 on socket 1 00:18:00.855 EAL: Detected lcore 34 as core 12 on socket 1 00:18:00.855 EAL: Detected lcore 35 as core 13 on socket 1 00:18:00.855 EAL: Detected lcore 36 as core 16 on socket 1 00:18:00.855 EAL: Detected lcore 37 as core 17 on socket 1 00:18:00.855 EAL: Detected lcore 38 as core 18 on socket 1 00:18:00.855 EAL: Detected lcore 39 as core 19 on socket 1 00:18:00.855 EAL: Detected lcore 40 as core 20 on socket 1 00:18:00.855 EAL: Detected lcore 41 as core 21 on socket 1 00:18:00.855 EAL: Detected lcore 42 as core 24 on socket 1 00:18:00.855 EAL: Detected lcore 43 as core 25 on socket 1 00:18:00.856 EAL: Detected lcore 44 as core 26 on socket 1 00:18:00.856 EAL: Detected lcore 45 as core 27 on socket 1 00:18:00.856 EAL: Detected lcore 46 as core 28 on socket 1 00:18:00.856 EAL: Detected lcore 47 as core 29 on socket 1 00:18:00.856 EAL: Detected lcore 48 as core 0 on socket 0 00:18:00.856 EAL: Detected lcore 49 as core 1 on socket 0 00:18:00.856 EAL: Detected lcore 50 as core 2 on socket 0 00:18:00.856 EAL: Detected lcore 51 as core 3 on socket 0 00:18:00.856 EAL: Detected lcore 52 as core 4 on socket 0 00:18:00.856 EAL: Detected lcore 53 as core 5 on socket 0 00:18:00.856 EAL: Detected lcore 54 as core 6 on socket 0 00:18:00.856 EAL: Detected lcore 55 as core 8 on socket 0 00:18:00.856 EAL: Detected lcore 56 as core 9 on socket 0 00:18:00.856 EAL: Detected lcore 57 as core 10 on socket 0 00:18:00.856 EAL: Detected lcore 58 as core 11 on socket 0 00:18:00.856 EAL: Detected lcore 59 as core 12 on socket 0 00:18:00.856 EAL: Detected lcore 60 as core 13 on socket 0 00:18:00.856 EAL: Detected lcore 61 as core 16 on socket 0 00:18:00.856 EAL: Detected lcore 62 as core 17 on socket 0 00:18:00.856 EAL: Detected lcore 63 as core 18 on socket 0 
00:18:00.856 EAL: Detected lcore 64 as core 19 on socket 0
00:18:00.856 EAL: Detected lcore 65 as core 20 on socket 0
00:18:00.856 EAL: Detected lcore 66 as core 21 on socket 0
00:18:00.856 EAL: Detected lcore 67 as core 25 on socket 0
00:18:00.856 EAL: Detected lcore 68 as core 26 on socket 0
00:18:00.856 EAL: Detected lcore 69 as core 27 on socket 0
00:18:00.856 EAL: Detected lcore 70 as core 28 on socket 0
00:18:00.856 EAL: Detected lcore 71 as core 29 on socket 0
00:18:00.856 EAL: Detected lcore 72 as core 0 on socket 1
00:18:00.856 EAL: Detected lcore 73 as core 1 on socket 1
00:18:00.856 EAL: Detected lcore 74 as core 2 on socket 1
00:18:00.856 EAL: Detected lcore 75 as core 3 on socket 1
00:18:00.856 EAL: Detected lcore 76 as core 4 on socket 1
00:18:00.856 EAL: Detected lcore 77 as core 5 on socket 1
00:18:00.856 EAL: Detected lcore 78 as core 6 on socket 1
00:18:00.856 EAL: Detected lcore 79 as core 9 on socket 1
00:18:00.856 EAL: Detected lcore 80 as core 10 on socket 1
00:18:00.856 EAL: Detected lcore 81 as core 11 on socket 1
00:18:00.856 EAL: Detected lcore 82 as core 12 on socket 1
00:18:00.856 EAL: Detected lcore 83 as core 13 on socket 1
00:18:00.856 EAL: Detected lcore 84 as core 16 on socket 1
00:18:00.856 EAL: Detected lcore 85 as core 17 on socket 1
00:18:00.856 EAL: Detected lcore 86 as core 18 on socket 1
00:18:00.856 EAL: Detected lcore 87 as core 19 on socket 1
00:18:00.856 EAL: Detected lcore 88 as core 20 on socket 1
00:18:00.856 EAL: Detected lcore 89 as core 21 on socket 1
00:18:00.856 EAL: Detected lcore 90 as core 24 on socket 1
00:18:00.856 EAL: Detected lcore 91 as core 25 on socket 1
00:18:00.856 EAL: Detected lcore 92 as core 26 on socket 1
00:18:00.856 EAL: Detected lcore 93 as core 27 on socket 1
00:18:00.856 EAL: Detected lcore 94 as core 28 on socket 1
00:18:00.856 EAL: Detected lcore 95 as core 29 on socket 1
00:18:00.856 EAL: Maximum logical cores by configuration: 128
00:18:00.856 EAL: Detected CPU lcores: 96
00:18:00.856 EAL: Detected NUMA nodes: 2
00:18:00.856 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:18:00.856 EAL: Detected shared linkage of DPDK
00:18:00.856 EAL: No shared files mode enabled, IPC will be disabled
00:18:00.856 EAL: Bus pci wants IOVA as 'DC'
00:18:00.856 EAL: Buses did not request a specific IOVA mode.
00:18:00.856 EAL: IOMMU is available, selecting IOVA as VA mode.
00:18:00.856 EAL: Selected IOVA mode 'VA'
00:18:00.856 EAL: Probing VFIO support...
00:18:00.856 EAL: IOMMU type 1 (Type 1) is supported
00:18:00.856 EAL: IOMMU type 7 (sPAPR) is not supported
00:18:00.856 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:18:00.856 EAL: VFIO support initialized
00:18:00.856 EAL: Ask a virtual area of 0x2e000 bytes
00:18:00.856 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:18:00.856 EAL: Setting up physically contiguous memory...
00:18:00.856 EAL: Setting maximum number of open files to 524288
00:18:00.856 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:18:00.856 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:18:00.856 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:18:00.856 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:18:00.856 EAL: Ask a virtual area of 0x61000 bytes
00:18:00.856 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:18:00.856 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:18:00.856 EAL: Ask a virtual area of 0x400000000 bytes
00:18:00.856 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:18:00.856 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:18:00.856 EAL: Hugepages will be freed exactly as allocated.
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: TSC frequency is ~2300000 KHz
00:18:00.856 EAL: Main lcore 0 is ready (tid=7f8d770e7a00;cpuset=[0])
00:18:00.856 EAL: Trying to obtain current memory policy.
00:18:00.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.856 EAL: Restoring previous memory policy: 0
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was expanded by 2MB
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: No PCI address specified using 'addr=' in: bus=pci
00:18:00.856 EAL: Mem event callback 'spdk:(nil)' registered
00:18:00.856 
00:18:00.856 
00:18:00.856 CUnit - A unit testing framework for C - Version 2.1-3
00:18:00.856 http://cunit.sourceforge.net/
00:18:00.856 
00:18:00.856 
00:18:00.856 Suite: components_suite
00:18:00.856 Test: vtophys_malloc_test ...passed
00:18:00.856 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:18:00.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.856 EAL: Restoring previous memory policy: 4
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was expanded by 4MB
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was shrunk by 4MB
00:18:00.856 EAL: Trying to obtain current memory policy.
00:18:00.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.856 EAL: Restoring previous memory policy: 4
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was expanded by 6MB
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was shrunk by 6MB
00:18:00.856 EAL: Trying to obtain current memory policy.
00:18:00.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.856 EAL: Restoring previous memory policy: 4
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was expanded by 10MB
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was shrunk by 10MB
00:18:00.856 EAL: Trying to obtain current memory policy.
00:18:00.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.856 EAL: Restoring previous memory policy: 4
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.856 EAL: request: mp_malloc_sync
00:18:00.856 EAL: No shared files mode enabled, IPC is disabled
00:18:00.856 EAL: Heap on socket 0 was expanded by 18MB
00:18:00.856 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was shrunk by 18MB
00:18:00.857 EAL: Trying to obtain current memory policy.
00:18:00.857 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.857 EAL: Restoring previous memory policy: 4
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was expanded by 34MB
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was shrunk by 34MB
00:18:00.857 EAL: Trying to obtain current memory policy.
00:18:00.857 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.857 EAL: Restoring previous memory policy: 4
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was expanded by 66MB
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was shrunk by 66MB
00:18:00.857 EAL: Trying to obtain current memory policy.
00:18:00.857 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.857 EAL: Restoring previous memory policy: 4
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was expanded by 130MB
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was shrunk by 130MB
00:18:00.857 EAL: Trying to obtain current memory policy.
00:18:00.857 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:00.857 EAL: Restoring previous memory policy: 4
00:18:00.857 EAL: Calling mem event callback 'spdk:(nil)'
00:18:00.857 EAL: request: mp_malloc_sync
00:18:00.857 EAL: No shared files mode enabled, IPC is disabled
00:18:00.857 EAL: Heap on socket 0 was expanded by 258MB
00:18:01.114 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.114 EAL: request: mp_malloc_sync
00:18:01.114 EAL: No shared files mode enabled, IPC is disabled
00:18:01.114 EAL: Heap on socket 0 was shrunk by 258MB
00:18:01.114 EAL: Trying to obtain current memory policy.
00:18:01.114 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:01.114 EAL: Restoring previous memory policy: 4
00:18:01.114 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.114 EAL: request: mp_malloc_sync
00:18:01.114 EAL: No shared files mode enabled, IPC is disabled
00:18:01.114 EAL: Heap on socket 0 was expanded by 514MB
00:18:01.114 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.372 EAL: request: mp_malloc_sync
00:18:01.372 EAL: No shared files mode enabled, IPC is disabled
00:18:01.372 EAL: Heap on socket 0 was shrunk by 514MB
00:18:01.372 EAL: Trying to obtain current memory policy.
00:18:01.372 EAL: Setting policy MPOL_PREFERRED for socket 0
00:18:01.372 EAL: Restoring previous memory policy: 4
00:18:01.372 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.372 EAL: request: mp_malloc_sync
00:18:01.372 EAL: No shared files mode enabled, IPC is disabled
00:18:01.372 EAL: Heap on socket 0 was expanded by 1026MB
00:18:01.630 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.888 EAL: request: mp_malloc_sync
00:18:01.888 EAL: No shared files mode enabled, IPC is disabled
00:18:01.888 EAL: Heap on socket 0 was shrunk by 1026MB
00:18:01.888 passed
00:18:01.888 
00:18:01.888 Run Summary: Type Total Ran Passed Failed Inactive
00:18:01.888 suites 1 1 n/a 0 0
00:18:01.888 tests 2 2 2 0 0
00:18:01.888 asserts 497 497 497 0 n/a
00:18:01.888 
00:18:01.888 Elapsed time = 0.964 seconds
00:18:01.888 EAL: Calling mem event callback 'spdk:(nil)'
00:18:01.888 EAL: request: mp_malloc_sync
00:18:01.888 EAL: No shared files mode enabled, IPC is disabled
00:18:01.888 EAL: Heap on socket 0 was shrunk by 2MB
00:18:01.888 EAL: No shared files mode enabled, IPC is disabled
00:18:01.888 EAL: No shared files mode enabled, IPC is disabled
00:18:01.888 EAL: No shared files mode enabled, IPC is disabled
00:18:01.888 
00:18:01.888 real 0m1.088s
00:18:01.888 user 0m0.643s
00:18:01.888 sys 0m0.415s
00:18:01.888 10:29:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:01.888 10:29:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:18:01.888 ************************************
00:18:01.888 END TEST env_vtophys
00:18:01.888 ************************************
00:18:01.888 10:29:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:18:01.888 10:29:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:01.888 10:29:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:01.888 10:29:02 env -- common/autotest_common.sh@10 -- # set +x
00:18:01.888 ************************************
00:18:01.888 START TEST env_pci
00:18:01.888 ************************************
00:18:01.888 10:29:02 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:18:01.888 
00:18:01.888 
00:18:01.888 CUnit - A unit testing framework for C - Version 2.1-3
00:18:01.888 http://cunit.sourceforge.net/
00:18:01.888 
00:18:01.888 
00:18:01.888 Suite: pci
00:18:01.888 Test: pci_hook ...[2024-12-09 10:29:02.937897] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 465306 has claimed it
00:18:01.888 EAL: Cannot find device (10000:00:01.0)
00:18:01.888 EAL: Failed to attach device on primary process
00:18:01.888 passed
00:18:01.888 
00:18:01.888 Run Summary: Type Total Ran Passed Failed Inactive
00:18:01.888 suites 1 1 n/a 0 0
00:18:01.888 tests 1 1 1 0 0
00:18:01.888 asserts 25 25 25 0 n/a
00:18:01.888 
00:18:01.888 Elapsed time = 0.026 seconds
00:18:01.888 
00:18:01.888 real 0m0.046s
00:18:01.888 user 0m0.014s
00:18:01.888 sys 0m0.031s
00:18:01.888 10:29:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:01.888 10:29:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:18:01.888 ************************************
00:18:01.888 END TEST env_pci
00:18:01.888 ************************************
00:18:01.888 10:29:02 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:18:01.888 10:29:02 env -- env/env.sh@15 -- # uname
00:18:01.889 10:29:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:18:01.889 10:29:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:18:01.889 10:29:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:18:01.889 10:29:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:01.889 10:29:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:01.889 10:29:03 env -- common/autotest_common.sh@10 -- # set +x
00:18:01.889 ************************************
00:18:01.889 START TEST env_dpdk_post_init
00:18:01.889 ************************************
00:18:01.889 10:29:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:18:02.147 EAL: Detected CPU lcores: 96
00:18:02.147 EAL: Detected NUMA nodes: 2
00:18:02.147 EAL: Detected shared linkage of DPDK
00:18:02.147 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:18:02.147 EAL: Selected IOVA mode 'VA'
00:18:02.147 EAL: VFIO support initialized
00:18:02.147 TELEMETRY: No legacy callbacks, legacy socket not created
00:18:02.147 EAL: Using IOMMU type 1 (Type 1)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:18:02.147 EAL: Ignore mapping IO port bar(1)
00:18:02.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:18:03.079 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:18:03.079 EAL: Ignore mapping IO port bar(1)
00:18:03.079 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:18:06.360 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:18:06.360 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:18:06.360 Starting DPDK initialization...
00:18:06.360 Starting SPDK post initialization...
00:18:06.360 SPDK NVMe probe
00:18:06.360 Attaching to 0000:5e:00.0
00:18:06.360 Attached to 0000:5e:00.0
00:18:06.360 Cleaning up...
00:18:06.360 
00:18:06.360 real 0m4.382s
00:18:06.360 user 0m3.000s
00:18:06.360 sys 0m0.452s
00:18:06.360 10:29:07 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.360 10:29:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:18:06.360 ************************************
00:18:06.360 END TEST env_dpdk_post_init
00:18:06.360 ************************************
00:18:06.360 10:29:07 env -- env/env.sh@26 -- # uname
00:18:06.360 10:29:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:18:06.360 10:29:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:18:06.360 10:29:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:06.360 10:29:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:06.360 10:29:07 env -- common/autotest_common.sh@10 -- # set +x
00:18:06.360 ************************************
00:18:06.360 START TEST env_mem_callbacks
00:18:06.360 ************************************
00:18:06.360 10:29:07 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:18:06.360 EAL: Detected CPU lcores: 96
00:18:06.360 EAL: Detected NUMA nodes: 2
00:18:06.360 EAL: Detected shared linkage of DPDK
00:18:06.360 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:18:06.360 EAL: Selected IOVA mode 'VA'
00:18:06.360 EAL: VFIO support initialized
00:18:06.360 TELEMETRY: No legacy callbacks, legacy socket not created
00:18:06.360 
00:18:06.360 
00:18:06.360 CUnit - A unit testing framework for C - Version 2.1-3
00:18:06.360 http://cunit.sourceforge.net/
00:18:06.360 
00:18:06.360 
00:18:06.360 Suite: memory
00:18:06.360 Test: test ...
00:18:06.360 register 0x200000200000 2097152
00:18:06.360 malloc 3145728
00:18:06.619 register 0x200000400000 4194304
00:18:06.619 buf 0x200000500000 len 3145728 PASSED
00:18:06.619 malloc 64
00:18:06.619 buf 0x2000004fff40 len 64 PASSED
00:18:06.619 malloc 4194304
00:18:06.619 register 0x200000800000 6291456
00:18:06.619 buf 0x200000a00000 len 4194304 PASSED
00:18:06.619 free 0x200000500000 3145728
00:18:06.619 free 0x2000004fff40 64
00:18:06.619 unregister 0x200000400000 4194304 PASSED
00:18:06.619 free 0x200000a00000 4194304
00:18:06.619 unregister 0x200000800000 6291456 PASSED
00:18:06.619 malloc 8388608
00:18:06.619 register 0x200000400000 10485760
00:18:06.619 buf 0x200000600000 len 8388608 PASSED
00:18:06.619 free 0x200000600000 8388608
00:18:06.619 unregister 0x200000400000 10485760 PASSED
00:18:06.619 passed
00:18:06.619 
00:18:06.619 Run Summary: Type Total Ran Passed Failed Inactive
00:18:06.619 suites 1 1 n/a 0 0
00:18:06.619 tests 1 1 1 0 0
00:18:06.619 asserts 15 15 15 0 n/a
00:18:06.619 
00:18:06.619 Elapsed time = 0.005 seconds
00:18:06.619 
00:18:06.619 real 0m0.043s
00:18:06.619 user 0m0.016s
00:18:06.619 sys 0m0.027s
00:18:06.619 10:29:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.619 10:29:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:18:06.619 ************************************
00:18:06.619 END TEST env_mem_callbacks
00:18:06.619 ************************************
00:18:06.619 
00:18:06.619 real 0m6.509s
00:18:06.619 user 0m4.323s
00:18:06.619 sys 0m1.257s
00:18:06.619 10:29:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.619 10:29:07 env -- common/autotest_common.sh@10 -- # set +x
00:18:06.619 ************************************
00:18:06.619 END TEST env
00:18:06.619 ************************************
00:18:06.619 10:29:07 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:18:06.619 10:29:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:06.619 10:29:07 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:06.619 10:29:07 -- common/autotest_common.sh@10 -- # set +x
00:18:06.619 ************************************
00:18:06.619 START TEST rpc
00:18:06.619 ************************************
00:18:06.619 10:29:07 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:18:06.619 * Looking for test storage...
00:18:06.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:18:06.619 10:29:07 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:06.619 10:29:07 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:18:06.619 10:29:07 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:06.619 10:29:07 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:06.619 10:29:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:06.619 10:29:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:06.619 10:29:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:06.619 10:29:07 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:18:06.619 10:29:07 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:18:06.619 10:29:07 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:18:06.619 10:29:07 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:18:06.619 10:29:07 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:18:06.619 10:29:07 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:06.619 10:29:07 rpc -- scripts/common.sh@344 -- # case "$op" in
00:18:06.619 10:29:07 rpc -- scripts/common.sh@345 -- # : 1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:06.619 10:29:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:06.619 10:29:07 rpc -- scripts/common.sh@365 -- # decimal 1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@353 -- # local d=1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:06.619 10:29:07 rpc -- scripts/common.sh@355 -- # echo 1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:18:06.619 10:29:07 rpc -- scripts/common.sh@366 -- # decimal 2
00:18:06.878 10:29:07 rpc -- scripts/common.sh@353 -- # local d=2
00:18:06.878 10:29:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:06.878 10:29:07 rpc -- scripts/common.sh@355 -- # echo 2
00:18:06.878 10:29:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:18:06.878 10:29:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:06.878 10:29:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:06.878 10:29:07 rpc -- scripts/common.sh@368 -- # return 0
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:18:06.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:06.878 --rc genhtml_branch_coverage=1
00:18:06.878 --rc genhtml_function_coverage=1
00:18:06.878 --rc genhtml_legend=1
00:18:06.878 --rc geninfo_all_blocks=1
00:18:06.878 --rc geninfo_unexecuted_blocks=1
00:18:06.878 
00:18:06.878 '
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:18:06.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:06.878 --rc genhtml_branch_coverage=1
00:18:06.878 --rc genhtml_function_coverage=1
00:18:06.878 --rc genhtml_legend=1
00:18:06.878 --rc geninfo_all_blocks=1
00:18:06.878 --rc geninfo_unexecuted_blocks=1
00:18:06.878 
00:18:06.878 '
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:06.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:06.878 --rc genhtml_branch_coverage=1
00:18:06.878 --rc genhtml_function_coverage=1
00:18:06.878 --rc genhtml_legend=1
00:18:06.878 --rc geninfo_all_blocks=1
00:18:06.878 --rc geninfo_unexecuted_blocks=1
00:18:06.878 
00:18:06.878 '
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:06.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:06.878 --rc genhtml_branch_coverage=1
00:18:06.878 --rc genhtml_function_coverage=1
00:18:06.878 --rc genhtml_legend=1
00:18:06.878 --rc geninfo_all_blocks=1
00:18:06.878 --rc geninfo_unexecuted_blocks=1
00:18:06.878 
00:18:06.878 '
00:18:06.878 10:29:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=466184
00:18:06.878 10:29:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:18:06.878 10:29:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:18:06.878 10:29:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 466184
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 466184 ']'
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:06.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:06.878 10:29:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:18:06.878 [2024-12-09 10:29:07.853223] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:18:06.878 [2024-12-09 10:29:07.853271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466184 ]
00:18:06.878 [2024-12-09 10:29:07.919146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:06.878 [2024-12-09 10:29:07.958856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:18:06.878 [2024-12-09 10:29:07.958894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 466184' to capture a snapshot of events at runtime.
00:18:06.878 [2024-12-09 10:29:07.958901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:06.878 [2024-12-09 10:29:07.958907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:06.878 [2024-12-09 10:29:07.958912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid466184 for offline analysis/debug.
00:18:06.878 [2024-12-09 10:29:07.959477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.138 10:29:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:07.138 10:29:08 rpc -- common/autotest_common.sh@868 -- # return 0
00:18:07.138 10:29:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:18:07.138 10:29:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:18:07.138 10:29:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:18:07.138 10:29:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:18:07.138 10:29:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:07.138 10:29:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:07.138 10:29:08 rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.138 ************************************
00:18:07.138 START TEST rpc_integrity
00:18:07.138 ************************************
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:18:07.138 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.138 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:18:07.138 {
00:18:07.138 "name": "Malloc0",
00:18:07.138 "aliases": [
00:18:07.138 "9f6c445b-2054-43d4-b936-8ef8f29818ef"
00:18:07.138 ],
00:18:07.138 "product_name": "Malloc disk",
00:18:07.138 "block_size": 512,
00:18:07.138 "num_blocks": 16384,
00:18:07.138 "uuid": "9f6c445b-2054-43d4-b936-8ef8f29818ef",
00:18:07.138 "assigned_rate_limits": {
00:18:07.138 "rw_ios_per_sec": 0,
00:18:07.138 "rw_mbytes_per_sec": 0,
00:18:07.138 "r_mbytes_per_sec": 0,
00:18:07.138 "w_mbytes_per_sec": 0
00:18:07.138 },
00:18:07.138 "claimed": false,
00:18:07.138 "zoned": false,
00:18:07.138 "supported_io_types": {
00:18:07.138 "read": true,
00:18:07.138 "write": true,
00:18:07.138 "unmap": true,
00:18:07.138 "flush": true,
00:18:07.138 "reset": true,
00:18:07.138 "nvme_admin": false,
00:18:07.138 "nvme_io": false,
00:18:07.138 "nvme_io_md": false,
00:18:07.138 "write_zeroes": true,
00:18:07.138 "zcopy": true,
00:18:07.138 "get_zone_info": false,
00:18:07.138 "zone_management": false,
00:18:07.138 "zone_append": false,
00:18:07.138 "compare": false,
00:18:07.138 "compare_and_write": false,
00:18:07.138 "abort": true,
00:18:07.138 "seek_hole": false,
00:18:07.138 "seek_data": false,
00:18:07.138 "copy": true,
00:18:07.138 "nvme_iov_md": false
00:18:07.138 },
00:18:07.138 "memory_domains": [
00:18:07.138 {
00:18:07.138 "dma_device_id": "system",
00:18:07.138 "dma_device_type": 1
00:18:07.138 },
00:18:07.138 {
00:18:07.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:07.138 "dma_device_type": 2
00:18:07.138 }
00:18:07.138 ],
00:18:07.138 "driver_specific": {}
00:18:07.138 }
00:18:07.138 ]'
00:18:07.398 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:18:07.398 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:18:07.398 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:18:07.398 [2024-12-09 10:29:08.325336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:18:07.398 [2024-12-09 10:29:08.325369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:07.398 [2024-12-09 10:29:08.325382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfab100
00:18:07.398 [2024-12-09 10:29:08.325388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:07.398 [2024-12-09 10:29:08.326500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:07.398 [2024-12-09 10:29:08.326523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:18:07.398 Passthru0
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.398 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:18:07.398 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.398 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:18:07.398 {
00:18:07.398 "name": "Malloc0",
00:18:07.398 "aliases": [
00:18:07.398 "9f6c445b-2054-43d4-b936-8ef8f29818ef"
00:18:07.398 ],
00:18:07.398 "product_name": "Malloc disk",
00:18:07.398 "block_size": 512,
00:18:07.398 "num_blocks": 16384,
00:18:07.398 "uuid": "9f6c445b-2054-43d4-b936-8ef8f29818ef",
00:18:07.398 "assigned_rate_limits": {
00:18:07.398 "rw_ios_per_sec": 0,
00:18:07.398 "rw_mbytes_per_sec": 0,
00:18:07.398 "r_mbytes_per_sec": 0,
00:18:07.398 "w_mbytes_per_sec": 0
00:18:07.398 },
00:18:07.398 "claimed": true,
00:18:07.398 "claim_type": "exclusive_write",
00:18:07.398 "zoned": false,
00:18:07.398 "supported_io_types": {
00:18:07.398 "read": true,
00:18:07.398 "write": true,
00:18:07.398 "unmap": true,
00:18:07.398 "flush": true,
00:18:07.398 "reset": true,
00:18:07.398 "nvme_admin": false,
00:18:07.398 "nvme_io": false,
00:18:07.398 "nvme_io_md": false,
00:18:07.398 "write_zeroes": true,
00:18:07.398 "zcopy": true,
00:18:07.398 "get_zone_info": false,
00:18:07.398 "zone_management": false,
00:18:07.398 "zone_append": false,
00:18:07.398 "compare": false,
00:18:07.398 "compare_and_write": false,
00:18:07.398 "abort": true,
00:18:07.398 "seek_hole": false,
00:18:07.398 "seek_data": false,
00:18:07.398 "copy": true,
00:18:07.398 "nvme_iov_md": false
00:18:07.398 },
00:18:07.398 "memory_domains": [
00:18:07.398 {
00:18:07.398 "dma_device_id": "system",
00:18:07.398 "dma_device_type": 1
00:18:07.398 },
00:18:07.398 {
00:18:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:07.398 "dma_device_type": 2
00:18:07.398 }
00:18:07.398 ],
00:18:07.398 "driver_specific": {}
00:18:07.398 },
00:18:07.398 {
00:18:07.398 "name": "Passthru0", 00:18:07.398 "aliases": [ 00:18:07.398 "a645878c-eff4-50c3-a81e-4c89e13a36d7" 00:18:07.398 ], 00:18:07.398 "product_name": "passthru", 00:18:07.398 "block_size": 512, 00:18:07.398 "num_blocks": 16384, 00:18:07.398 "uuid": "a645878c-eff4-50c3-a81e-4c89e13a36d7", 00:18:07.398 "assigned_rate_limits": { 00:18:07.398 "rw_ios_per_sec": 0, 00:18:07.398 "rw_mbytes_per_sec": 0, 00:18:07.398 "r_mbytes_per_sec": 0, 00:18:07.398 "w_mbytes_per_sec": 0 00:18:07.398 }, 00:18:07.398 "claimed": false, 00:18:07.398 "zoned": false, 00:18:07.398 "supported_io_types": { 00:18:07.398 "read": true, 00:18:07.398 "write": true, 00:18:07.398 "unmap": true, 00:18:07.398 "flush": true, 00:18:07.398 "reset": true, 00:18:07.398 "nvme_admin": false, 00:18:07.398 "nvme_io": false, 00:18:07.398 "nvme_io_md": false, 00:18:07.398 "write_zeroes": true, 00:18:07.398 "zcopy": true, 00:18:07.398 "get_zone_info": false, 00:18:07.398 "zone_management": false, 00:18:07.398 "zone_append": false, 00:18:07.398 "compare": false, 00:18:07.398 "compare_and_write": false, 00:18:07.398 "abort": true, 00:18:07.398 "seek_hole": false, 00:18:07.398 "seek_data": false, 00:18:07.398 "copy": true, 00:18:07.398 "nvme_iov_md": false 00:18:07.398 }, 00:18:07.398 "memory_domains": [ 00:18:07.398 { 00:18:07.398 "dma_device_id": "system", 00:18:07.398 "dma_device_type": 1 00:18:07.398 }, 00:18:07.398 { 00:18:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.398 "dma_device_type": 2 00:18:07.398 } 00:18:07.398 ], 00:18:07.398 "driver_specific": { 00:18:07.398 "passthru": { 00:18:07.398 "name": "Passthru0", 00:18:07.398 "base_bdev_name": "Malloc0" 00:18:07.398 } 00:18:07.399 } 00:18:07.399 } 00:18:07.399 ]' 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:07.399 10:29:08 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:07.399 10:29:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:07.399 00:18:07.399 real 0m0.271s 00:18:07.399 user 0m0.171s 00:18:07.399 sys 0m0.043s 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 ************************************ 00:18:07.399 END TEST rpc_integrity 00:18:07.399 ************************************ 00:18:07.399 10:29:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:18:07.399 10:29:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.399 10:29:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.399 10:29:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 ************************************ 00:18:07.399 START TEST rpc_plugins 
00:18:07.399 ************************************ 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:18:07.399 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.399 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:18:07.399 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.399 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:18:07.399 { 00:18:07.399 "name": "Malloc1", 00:18:07.399 "aliases": [ 00:18:07.399 "0fdf8126-125c-44ac-94ec-a00df8c171b5" 00:18:07.399 ], 00:18:07.399 "product_name": "Malloc disk", 00:18:07.399 "block_size": 4096, 00:18:07.399 "num_blocks": 256, 00:18:07.399 "uuid": "0fdf8126-125c-44ac-94ec-a00df8c171b5", 00:18:07.399 "assigned_rate_limits": { 00:18:07.399 "rw_ios_per_sec": 0, 00:18:07.399 "rw_mbytes_per_sec": 0, 00:18:07.399 "r_mbytes_per_sec": 0, 00:18:07.399 "w_mbytes_per_sec": 0 00:18:07.399 }, 00:18:07.399 "claimed": false, 00:18:07.399 "zoned": false, 00:18:07.399 "supported_io_types": { 00:18:07.399 "read": true, 00:18:07.399 "write": true, 00:18:07.399 "unmap": true, 00:18:07.399 "flush": true, 00:18:07.399 "reset": true, 00:18:07.399 "nvme_admin": false, 00:18:07.399 "nvme_io": false, 00:18:07.399 "nvme_io_md": false, 00:18:07.399 "write_zeroes": true, 00:18:07.399 "zcopy": true, 00:18:07.399 "get_zone_info": false, 00:18:07.399 "zone_management": false, 00:18:07.399 
"zone_append": false, 00:18:07.399 "compare": false, 00:18:07.399 "compare_and_write": false, 00:18:07.399 "abort": true, 00:18:07.399 "seek_hole": false, 00:18:07.399 "seek_data": false, 00:18:07.399 "copy": true, 00:18:07.399 "nvme_iov_md": false 00:18:07.399 }, 00:18:07.399 "memory_domains": [ 00:18:07.399 { 00:18:07.399 "dma_device_id": "system", 00:18:07.399 "dma_device_type": 1 00:18:07.399 }, 00:18:07.399 { 00:18:07.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.399 "dma_device_type": 2 00:18:07.399 } 00:18:07.399 ], 00:18:07.399 "driver_specific": {} 00:18:07.399 } 00:18:07.399 ]' 00:18:07.399 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:18:07.658 10:29:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:18:07.658 00:18:07.658 real 0m0.149s 00:18:07.658 user 0m0.088s 00:18:07.658 sys 0m0.021s 00:18:07.658 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.659 10:29:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:07.659 ************************************ 
00:18:07.659 END TEST rpc_plugins 00:18:07.659 ************************************ 00:18:07.659 10:29:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:18:07.659 10:29:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.659 10:29:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.659 10:29:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.659 ************************************ 00:18:07.659 START TEST rpc_trace_cmd_test 00:18:07.659 ************************************ 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:18:07.659 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid466184", 00:18:07.659 "tpoint_group_mask": "0x8", 00:18:07.659 "iscsi_conn": { 00:18:07.659 "mask": "0x2", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "scsi": { 00:18:07.659 "mask": "0x4", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "bdev": { 00:18:07.659 "mask": "0x8", 00:18:07.659 "tpoint_mask": "0xffffffffffffffff" 00:18:07.659 }, 00:18:07.659 "nvmf_rdma": { 00:18:07.659 "mask": "0x10", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "nvmf_tcp": { 00:18:07.659 "mask": "0x20", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "ftl": { 00:18:07.659 "mask": "0x40", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "blobfs": { 00:18:07.659 "mask": "0x80", 00:18:07.659 
"tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "dsa": { 00:18:07.659 "mask": "0x200", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "thread": { 00:18:07.659 "mask": "0x400", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "nvme_pcie": { 00:18:07.659 "mask": "0x800", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "iaa": { 00:18:07.659 "mask": "0x1000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "nvme_tcp": { 00:18:07.659 "mask": "0x2000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "bdev_nvme": { 00:18:07.659 "mask": "0x4000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "sock": { 00:18:07.659 "mask": "0x8000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "blob": { 00:18:07.659 "mask": "0x10000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "bdev_raid": { 00:18:07.659 "mask": "0x20000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 }, 00:18:07.659 "scheduler": { 00:18:07.659 "mask": "0x40000", 00:18:07.659 "tpoint_mask": "0x0" 00:18:07.659 } 00:18:07.659 }' 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:18:07.659 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:18:07.918 00:18:07.918 real 0m0.226s 00:18:07.918 user 0m0.190s 00:18:07.918 sys 0m0.028s 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.918 10:29:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.918 ************************************ 00:18:07.918 END TEST rpc_trace_cmd_test 00:18:07.918 ************************************ 00:18:07.918 10:29:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:18:07.918 10:29:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:18:07.918 10:29:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:18:07.918 10:29:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.918 10:29:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.918 10:29:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.918 ************************************ 00:18:07.918 START TEST rpc_daemon_integrity 00:18:07.918 ************************************ 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:07.918 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:08.178 { 00:18:08.178 "name": "Malloc2", 00:18:08.178 "aliases": [ 00:18:08.178 "cf938f6a-c431-495a-b5d3-3cf4c77dbd60" 00:18:08.178 ], 00:18:08.178 "product_name": "Malloc disk", 00:18:08.178 "block_size": 512, 00:18:08.178 "num_blocks": 16384, 00:18:08.178 "uuid": "cf938f6a-c431-495a-b5d3-3cf4c77dbd60", 00:18:08.178 "assigned_rate_limits": { 00:18:08.178 "rw_ios_per_sec": 0, 00:18:08.178 "rw_mbytes_per_sec": 0, 00:18:08.178 "r_mbytes_per_sec": 0, 00:18:08.178 "w_mbytes_per_sec": 0 00:18:08.178 }, 00:18:08.178 "claimed": false, 00:18:08.178 "zoned": false, 00:18:08.178 "supported_io_types": { 00:18:08.178 "read": true, 00:18:08.178 "write": true, 00:18:08.178 "unmap": true, 00:18:08.178 "flush": true, 00:18:08.178 "reset": true, 00:18:08.178 "nvme_admin": false, 00:18:08.178 "nvme_io": false, 00:18:08.178 "nvme_io_md": false, 00:18:08.178 "write_zeroes": true, 00:18:08.178 "zcopy": true, 00:18:08.178 "get_zone_info": false, 00:18:08.178 "zone_management": false, 00:18:08.178 "zone_append": false, 00:18:08.178 "compare": false, 00:18:08.178 "compare_and_write": false, 00:18:08.178 "abort": true, 00:18:08.178 "seek_hole": false, 00:18:08.178 "seek_data": false, 00:18:08.178 "copy": true, 00:18:08.178 "nvme_iov_md": false 00:18:08.178 }, 00:18:08.178 "memory_domains": [ 00:18:08.178 { 
00:18:08.178 "dma_device_id": "system", 00:18:08.178 "dma_device_type": 1 00:18:08.178 }, 00:18:08.178 { 00:18:08.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.178 "dma_device_type": 2 00:18:08.178 } 00:18:08.178 ], 00:18:08.178 "driver_specific": {} 00:18:08.178 } 00:18:08.178 ]' 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.178 [2024-12-09 10:29:09.175629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:18:08.178 [2024-12-09 10:29:09.175666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.178 [2024-12-09 10:29:09.175677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe69450 00:18:08.178 [2024-12-09 10:29:09.175684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.178 [2024-12-09 10:29:09.176674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.178 [2024-12-09 10:29:09.176697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:08.178 Passthru0 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:08.178 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:08.178 { 00:18:08.178 "name": "Malloc2", 00:18:08.178 "aliases": [ 00:18:08.178 "cf938f6a-c431-495a-b5d3-3cf4c77dbd60" 00:18:08.178 ], 00:18:08.178 "product_name": "Malloc disk", 00:18:08.178 "block_size": 512, 00:18:08.178 "num_blocks": 16384, 00:18:08.178 "uuid": "cf938f6a-c431-495a-b5d3-3cf4c77dbd60", 00:18:08.178 "assigned_rate_limits": { 00:18:08.178 "rw_ios_per_sec": 0, 00:18:08.178 "rw_mbytes_per_sec": 0, 00:18:08.178 "r_mbytes_per_sec": 0, 00:18:08.178 "w_mbytes_per_sec": 0 00:18:08.178 }, 00:18:08.178 "claimed": true, 00:18:08.178 "claim_type": "exclusive_write", 00:18:08.178 "zoned": false, 00:18:08.178 "supported_io_types": { 00:18:08.178 "read": true, 00:18:08.178 "write": true, 00:18:08.178 "unmap": true, 00:18:08.178 "flush": true, 00:18:08.178 "reset": true, 00:18:08.178 "nvme_admin": false, 00:18:08.178 "nvme_io": false, 00:18:08.178 "nvme_io_md": false, 00:18:08.178 "write_zeroes": true, 00:18:08.178 "zcopy": true, 00:18:08.178 "get_zone_info": false, 00:18:08.178 "zone_management": false, 00:18:08.178 "zone_append": false, 00:18:08.178 "compare": false, 00:18:08.178 "compare_and_write": false, 00:18:08.178 "abort": true, 00:18:08.178 "seek_hole": false, 00:18:08.178 "seek_data": false, 00:18:08.178 "copy": true, 00:18:08.178 "nvme_iov_md": false 00:18:08.178 }, 00:18:08.178 "memory_domains": [ 00:18:08.178 { 00:18:08.178 "dma_device_id": "system", 00:18:08.178 "dma_device_type": 1 00:18:08.178 }, 00:18:08.178 { 00:18:08.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.178 "dma_device_type": 2 00:18:08.178 } 00:18:08.178 ], 00:18:08.178 "driver_specific": {} 00:18:08.178 }, 00:18:08.178 { 00:18:08.178 "name": "Passthru0", 00:18:08.178 "aliases": [ 00:18:08.178 "3da9925d-d71e-5119-9892-a2c8100c7be1" 00:18:08.178 ], 00:18:08.178 "product_name": "passthru", 00:18:08.178 "block_size": 512, 00:18:08.178 "num_blocks": 16384, 00:18:08.178 "uuid": 
"3da9925d-d71e-5119-9892-a2c8100c7be1", 00:18:08.178 "assigned_rate_limits": { 00:18:08.178 "rw_ios_per_sec": 0, 00:18:08.178 "rw_mbytes_per_sec": 0, 00:18:08.178 "r_mbytes_per_sec": 0, 00:18:08.178 "w_mbytes_per_sec": 0 00:18:08.178 }, 00:18:08.178 "claimed": false, 00:18:08.178 "zoned": false, 00:18:08.179 "supported_io_types": { 00:18:08.179 "read": true, 00:18:08.179 "write": true, 00:18:08.179 "unmap": true, 00:18:08.179 "flush": true, 00:18:08.179 "reset": true, 00:18:08.179 "nvme_admin": false, 00:18:08.179 "nvme_io": false, 00:18:08.179 "nvme_io_md": false, 00:18:08.179 "write_zeroes": true, 00:18:08.179 "zcopy": true, 00:18:08.179 "get_zone_info": false, 00:18:08.179 "zone_management": false, 00:18:08.179 "zone_append": false, 00:18:08.179 "compare": false, 00:18:08.179 "compare_and_write": false, 00:18:08.179 "abort": true, 00:18:08.179 "seek_hole": false, 00:18:08.179 "seek_data": false, 00:18:08.179 "copy": true, 00:18:08.179 "nvme_iov_md": false 00:18:08.179 }, 00:18:08.179 "memory_domains": [ 00:18:08.179 { 00:18:08.179 "dma_device_id": "system", 00:18:08.179 "dma_device_type": 1 00:18:08.179 }, 00:18:08.179 { 00:18:08.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.179 "dma_device_type": 2 00:18:08.179 } 00:18:08.179 ], 00:18:08.179 "driver_specific": { 00:18:08.179 "passthru": { 00:18:08.179 "name": "Passthru0", 00:18:08.179 "base_bdev_name": "Malloc2" 00:18:08.179 } 00:18:08.179 } 00:18:08.179 } 00:18:08.179 ]' 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:08.179 00:18:08.179 real 0m0.275s 00:18:08.179 user 0m0.171s 00:18:08.179 sys 0m0.041s 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.179 10:29:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 ************************************ 00:18:08.179 END TEST rpc_daemon_integrity 00:18:08.179 ************************************ 00:18:08.179 10:29:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:08.179 10:29:09 rpc -- rpc/rpc.sh@84 -- # killprocess 466184 00:18:08.179 10:29:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 466184 ']' 00:18:08.179 10:29:09 rpc -- common/autotest_common.sh@958 -- # kill -0 466184 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@959 -- # uname 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.438 10:29:09 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466184 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466184' 00:18:08.438 killing process with pid 466184 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@973 -- # kill 466184 00:18:08.438 10:29:09 rpc -- common/autotest_common.sh@978 -- # wait 466184 00:18:08.698 00:18:08.698 real 0m2.098s 00:18:08.698 user 0m2.695s 00:18:08.698 sys 0m0.675s 00:18:08.699 10:29:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.699 10:29:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.699 ************************************ 00:18:08.699 END TEST rpc 00:18:08.699 ************************************ 00:18:08.699 10:29:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:18:08.699 10:29:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.699 10:29:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.699 10:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:08.699 ************************************ 00:18:08.699 START TEST skip_rpc 00:18:08.699 ************************************ 00:18:08.699 10:29:09 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:18:08.958 * Looking for test storage... 
00:18:08.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.958 10:29:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.958 --rc genhtml_branch_coverage=1 00:18:08.958 --rc genhtml_function_coverage=1 00:18:08.958 --rc genhtml_legend=1 00:18:08.958 --rc geninfo_all_blocks=1 00:18:08.958 --rc geninfo_unexecuted_blocks=1 00:18:08.958 00:18:08.958 ' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.958 --rc genhtml_branch_coverage=1 00:18:08.958 --rc genhtml_function_coverage=1 00:18:08.958 --rc genhtml_legend=1 00:18:08.958 --rc geninfo_all_blocks=1 00:18:08.958 --rc geninfo_unexecuted_blocks=1 00:18:08.958 00:18:08.958 ' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:18:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.958 --rc genhtml_branch_coverage=1 00:18:08.958 --rc genhtml_function_coverage=1 00:18:08.958 --rc genhtml_legend=1 00:18:08.958 --rc geninfo_all_blocks=1 00:18:08.958 --rc geninfo_unexecuted_blocks=1 00:18:08.958 00:18:08.958 ' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.958 --rc genhtml_branch_coverage=1 00:18:08.958 --rc genhtml_function_coverage=1 00:18:08.958 --rc genhtml_legend=1 00:18:08.958 --rc geninfo_all_blocks=1 00:18:08.958 --rc geninfo_unexecuted_blocks=1 00:18:08.958 00:18:08.958 ' 00:18:08.958 10:29:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:18:08.958 10:29:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:18:08.958 10:29:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.958 10:29:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.958 ************************************ 00:18:08.958 START TEST skip_rpc 00:18:08.958 ************************************ 00:18:08.958 10:29:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:18:08.958 10:29:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=466771 00:18:08.958 10:29:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:08.958 10:29:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:18:08.958 10:29:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:18:08.959 [2024-12-09 10:29:10.053309] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:08.959 [2024-12-09 10:29:10.053363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466771 ] 00:18:08.959 [2024-12-09 10:29:10.119312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.218 [2024-12-09 10:29:10.161349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.553 10:29:15 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 466771 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 466771 ']' 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 466771 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466771 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466771' 00:18:14.553 killing process with pid 466771 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 466771 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 466771 00:18:14.553 00:18:14.553 real 0m5.388s 00:18:14.553 user 0m5.159s 00:18:14.553 sys 0m0.250s 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.553 10:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.553 ************************************ 00:18:14.553 END TEST skip_rpc 00:18:14.553 ************************************ 00:18:14.554 10:29:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:18:14.554 10:29:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.554 10:29:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.554 10:29:15 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.554 ************************************ 00:18:14.554 START TEST skip_rpc_with_json 00:18:14.554 ************************************ 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=467717 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 467717 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 467717 ']' 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.554 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:14.554 [2024-12-09 10:29:15.492687] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:14.554 [2024-12-09 10:29:15.492730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467717 ] 00:18:14.554 [2024-12-09 10:29:15.557968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.554 [2024-12-09 10:29:15.600457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:14.824 [2024-12-09 10:29:15.809806] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:18:14.824 request: 00:18:14.824 { 00:18:14.824 "trtype": "tcp", 00:18:14.824 "method": "nvmf_get_transports", 00:18:14.824 "req_id": 1 00:18:14.824 } 00:18:14.824 Got JSON-RPC error response 00:18:14.824 response: 00:18:14.824 { 00:18:14.824 "code": -19, 00:18:14.824 "message": "No such device" 00:18:14.824 } 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:14.824 [2024-12-09 10:29:15.821916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.824 10:29:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.824 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:18:14.824 { 00:18:14.824 "subsystems": [ 00:18:14.824 { 00:18:14.824 "subsystem": "fsdev", 00:18:14.824 "config": [ 00:18:14.824 { 00:18:14.824 "method": "fsdev_set_opts", 00:18:14.824 "params": { 00:18:14.824 "fsdev_io_pool_size": 65535, 00:18:14.824 "fsdev_io_cache_size": 256 00:18:14.824 } 00:18:14.824 } 00:18:14.824 ] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "vfio_user_target", 00:18:14.824 "config": null 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "keyring", 00:18:14.824 "config": [] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "iobuf", 00:18:14.824 "config": [ 00:18:14.824 { 00:18:14.824 "method": "iobuf_set_options", 00:18:14.824 "params": { 00:18:14.824 "small_pool_count": 8192, 00:18:14.824 "large_pool_count": 1024, 00:18:14.824 "small_bufsize": 8192, 00:18:14.824 "large_bufsize": 135168, 00:18:14.824 "enable_numa": false 00:18:14.824 } 00:18:14.824 } 00:18:14.824 ] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "sock", 00:18:14.824 "config": [ 00:18:14.824 { 00:18:14.824 "method": "sock_set_default_impl", 00:18:14.824 "params": { 00:18:14.824 "impl_name": "posix" 00:18:14.824 } 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "method": "sock_impl_set_options", 00:18:14.824 "params": { 00:18:14.824 "impl_name": "ssl", 00:18:14.824 "recv_buf_size": 4096, 00:18:14.824 "send_buf_size": 4096, 
00:18:14.824 "enable_recv_pipe": true, 00:18:14.824 "enable_quickack": false, 00:18:14.824 "enable_placement_id": 0, 00:18:14.824 "enable_zerocopy_send_server": true, 00:18:14.824 "enable_zerocopy_send_client": false, 00:18:14.824 "zerocopy_threshold": 0, 00:18:14.824 "tls_version": 0, 00:18:14.824 "enable_ktls": false 00:18:14.824 } 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "method": "sock_impl_set_options", 00:18:14.824 "params": { 00:18:14.824 "impl_name": "posix", 00:18:14.824 "recv_buf_size": 2097152, 00:18:14.824 "send_buf_size": 2097152, 00:18:14.824 "enable_recv_pipe": true, 00:18:14.824 "enable_quickack": false, 00:18:14.824 "enable_placement_id": 0, 00:18:14.824 "enable_zerocopy_send_server": true, 00:18:14.824 "enable_zerocopy_send_client": false, 00:18:14.824 "zerocopy_threshold": 0, 00:18:14.824 "tls_version": 0, 00:18:14.824 "enable_ktls": false 00:18:14.824 } 00:18:14.824 } 00:18:14.824 ] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "vmd", 00:18:14.824 "config": [] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "accel", 00:18:14.824 "config": [ 00:18:14.824 { 00:18:14.824 "method": "accel_set_options", 00:18:14.824 "params": { 00:18:14.824 "small_cache_size": 128, 00:18:14.824 "large_cache_size": 16, 00:18:14.824 "task_count": 2048, 00:18:14.824 "sequence_count": 2048, 00:18:14.824 "buf_count": 2048 00:18:14.824 } 00:18:14.824 } 00:18:14.824 ] 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "subsystem": "bdev", 00:18:14.824 "config": [ 00:18:14.824 { 00:18:14.824 "method": "bdev_set_options", 00:18:14.824 "params": { 00:18:14.824 "bdev_io_pool_size": 65535, 00:18:14.824 "bdev_io_cache_size": 256, 00:18:14.824 "bdev_auto_examine": true, 00:18:14.824 "iobuf_small_cache_size": 128, 00:18:14.824 "iobuf_large_cache_size": 16 00:18:14.824 } 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "method": "bdev_raid_set_options", 00:18:14.824 "params": { 00:18:14.824 "process_window_size_kb": 1024, 00:18:14.824 "process_max_bandwidth_mb_sec": 0 
00:18:14.824 } 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "method": "bdev_iscsi_set_options", 00:18:14.824 "params": { 00:18:14.824 "timeout_sec": 30 00:18:14.824 } 00:18:14.824 }, 00:18:14.824 { 00:18:14.824 "method": "bdev_nvme_set_options", 00:18:14.824 "params": { 00:18:14.824 "action_on_timeout": "none", 00:18:14.824 "timeout_us": 0, 00:18:14.824 "timeout_admin_us": 0, 00:18:14.824 "keep_alive_timeout_ms": 10000, 00:18:14.824 "arbitration_burst": 0, 00:18:14.824 "low_priority_weight": 0, 00:18:14.824 "medium_priority_weight": 0, 00:18:14.824 "high_priority_weight": 0, 00:18:14.824 "nvme_adminq_poll_period_us": 10000, 00:18:14.824 "nvme_ioq_poll_period_us": 0, 00:18:14.824 "io_queue_requests": 0, 00:18:14.824 "delay_cmd_submit": true, 00:18:14.824 "transport_retry_count": 4, 00:18:14.824 "bdev_retry_count": 3, 00:18:14.824 "transport_ack_timeout": 0, 00:18:14.824 "ctrlr_loss_timeout_sec": 0, 00:18:14.824 "reconnect_delay_sec": 0, 00:18:14.824 "fast_io_fail_timeout_sec": 0, 00:18:14.824 "disable_auto_failback": false, 00:18:14.824 "generate_uuids": false, 00:18:14.824 "transport_tos": 0, 00:18:14.824 "nvme_error_stat": false, 00:18:14.824 "rdma_srq_size": 0, 00:18:14.824 "io_path_stat": false, 00:18:14.824 "allow_accel_sequence": false, 00:18:14.824 "rdma_max_cq_size": 0, 00:18:14.824 "rdma_cm_event_timeout_ms": 0, 00:18:14.824 "dhchap_digests": [ 00:18:14.824 "sha256", 00:18:14.824 "sha384", 00:18:14.824 "sha512" 00:18:14.824 ], 00:18:14.824 "dhchap_dhgroups": [ 00:18:14.824 "null", 00:18:14.824 "ffdhe2048", 00:18:14.824 "ffdhe3072", 00:18:14.824 "ffdhe4096", 00:18:14.824 "ffdhe6144", 00:18:14.824 "ffdhe8192" 00:18:14.824 ] 00:18:14.824 } 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "method": "bdev_nvme_set_hotplug", 00:18:14.825 "params": { 00:18:14.825 "period_us": 100000, 00:18:14.825 "enable": false 00:18:14.825 } 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "method": "bdev_wait_for_examine" 00:18:14.825 } 00:18:14.825 ] 00:18:14.825 }, 00:18:14.825 { 
00:18:14.825 "subsystem": "scsi", 00:18:14.825 "config": null 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "scheduler", 00:18:14.825 "config": [ 00:18:14.825 { 00:18:14.825 "method": "framework_set_scheduler", 00:18:14.825 "params": { 00:18:14.825 "name": "static" 00:18:14.825 } 00:18:14.825 } 00:18:14.825 ] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "vhost_scsi", 00:18:14.825 "config": [] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "vhost_blk", 00:18:14.825 "config": [] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "ublk", 00:18:14.825 "config": [] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "nbd", 00:18:14.825 "config": [] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "nvmf", 00:18:14.825 "config": [ 00:18:14.825 { 00:18:14.825 "method": "nvmf_set_config", 00:18:14.825 "params": { 00:18:14.825 "discovery_filter": "match_any", 00:18:14.825 "admin_cmd_passthru": { 00:18:14.825 "identify_ctrlr": false 00:18:14.825 }, 00:18:14.825 "dhchap_digests": [ 00:18:14.825 "sha256", 00:18:14.825 "sha384", 00:18:14.825 "sha512" 00:18:14.825 ], 00:18:14.825 "dhchap_dhgroups": [ 00:18:14.825 "null", 00:18:14.825 "ffdhe2048", 00:18:14.825 "ffdhe3072", 00:18:14.825 "ffdhe4096", 00:18:14.825 "ffdhe6144", 00:18:14.825 "ffdhe8192" 00:18:14.825 ] 00:18:14.825 } 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "method": "nvmf_set_max_subsystems", 00:18:14.825 "params": { 00:18:14.825 "max_subsystems": 1024 00:18:14.825 } 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "method": "nvmf_set_crdt", 00:18:14.825 "params": { 00:18:14.825 "crdt1": 0, 00:18:14.825 "crdt2": 0, 00:18:14.825 "crdt3": 0 00:18:14.825 } 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "method": "nvmf_create_transport", 00:18:14.825 "params": { 00:18:14.825 "trtype": "TCP", 00:18:14.825 "max_queue_depth": 128, 00:18:14.825 "max_io_qpairs_per_ctrlr": 127, 00:18:14.825 "in_capsule_data_size": 4096, 00:18:14.825 "max_io_size": 131072, 00:18:14.825 
"io_unit_size": 131072, 00:18:14.825 "max_aq_depth": 128, 00:18:14.825 "num_shared_buffers": 511, 00:18:14.825 "buf_cache_size": 4294967295, 00:18:14.825 "dif_insert_or_strip": false, 00:18:14.825 "zcopy": false, 00:18:14.825 "c2h_success": true, 00:18:14.825 "sock_priority": 0, 00:18:14.825 "abort_timeout_sec": 1, 00:18:14.825 "ack_timeout": 0, 00:18:14.825 "data_wr_pool_size": 0 00:18:14.825 } 00:18:14.825 } 00:18:14.825 ] 00:18:14.825 }, 00:18:14.825 { 00:18:14.825 "subsystem": "iscsi", 00:18:14.825 "config": [ 00:18:14.825 { 00:18:14.825 "method": "iscsi_set_options", 00:18:14.825 "params": { 00:18:14.825 "node_base": "iqn.2016-06.io.spdk", 00:18:14.825 "max_sessions": 128, 00:18:14.825 "max_connections_per_session": 2, 00:18:14.825 "max_queue_depth": 64, 00:18:14.825 "default_time2wait": 2, 00:18:14.825 "default_time2retain": 20, 00:18:14.825 "first_burst_length": 8192, 00:18:14.825 "immediate_data": true, 00:18:14.825 "allow_duplicated_isid": false, 00:18:14.825 "error_recovery_level": 0, 00:18:14.825 "nop_timeout": 60, 00:18:14.825 "nop_in_interval": 30, 00:18:14.825 "disable_chap": false, 00:18:14.825 "require_chap": false, 00:18:14.825 "mutual_chap": false, 00:18:14.825 "chap_group": 0, 00:18:14.825 "max_large_datain_per_connection": 64, 00:18:14.825 "max_r2t_per_connection": 4, 00:18:14.825 "pdu_pool_size": 36864, 00:18:14.825 "immediate_data_pool_size": 16384, 00:18:14.825 "data_out_pool_size": 2048 00:18:14.825 } 00:18:14.825 } 00:18:14.825 ] 00:18:14.825 } 00:18:14.825 ] 00:18:14.825 } 00:18:14.825 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:14.825 10:29:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 467717 00:18:14.825 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 467717 ']' 00:18:14.825 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 467717 00:18:15.084 10:29:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:18:15.084 10:29:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467717 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467717' 00:18:15.085 killing process with pid 467717 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 467717 00:18:15.085 10:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 467717 00:18:15.344 10:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=467956 00:18:15.344 10:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:18:15.344 10:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 467956 ']' 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467956' 00:18:20.619 killing process with pid 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 467956 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:18:20.619 00:18:20.619 real 0m6.341s 00:18:20.619 user 0m6.051s 00:18:20.619 sys 0m0.579s 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.619 10:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:20.619 ************************************ 00:18:20.619 END TEST skip_rpc_with_json 00:18:20.619 ************************************ 00:18:20.877 10:29:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:18:20.877 10:29:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.877 10:29:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.877 10:29:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.877 ************************************ 00:18:20.877 START TEST skip_rpc_with_delay 00:18:20.877 ************************************ 00:18:20.877 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:18:20.877 10:29:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:20.877 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:20.878 [2024-12-09 10:29:21.893463] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.878 00:18:20.878 real 0m0.055s 00:18:20.878 user 0m0.032s 00:18:20.878 sys 0m0.022s 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.878 10:29:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 ************************************ 00:18:20.878 END TEST skip_rpc_with_delay 00:18:20.878 ************************************ 00:18:20.878 10:29:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:18:20.878 10:29:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:18:20.878 10:29:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:18:20.878 10:29:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.878 10:29:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.878 10:29:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 ************************************ 00:18:20.878 START TEST exit_on_failed_rpc_init 00:18:20.878 ************************************ 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=468949 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 468949 
00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 468949 ']' 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.878 10:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 [2024-12-09 10:29:22.016394] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:20.878 [2024-12-09 10:29:22.016435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468949 ] 00:18:21.136 [2024-12-09 10:29:22.081971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.136 [2024-12-09 10:29:22.125283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:18:21.395 
10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:18:21.395 [2024-12-09 10:29:22.376735] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:21.395 [2024-12-09 10:29:22.376780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468958 ] 00:18:21.395 [2024-12-09 10:29:22.439833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.395 [2024-12-09 10:29:22.480748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.395 [2024-12-09 10:29:22.480803] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:21.395 [2024-12-09 10:29:22.480812] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:21.395 [2024-12-09 10:29:22.480818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 468949 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 468949 ']' 00:18:21.395 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 468949 00:18:21.395 10:29:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468949 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468949' 00:18:21.654 killing process with pid 468949 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 468949 00:18:21.654 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 468949 00:18:21.942 00:18:21.942 real 0m0.978s 00:18:21.942 user 0m1.086s 00:18:21.942 sys 0m0.361s 00:18:21.942 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.942 10:29:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:21.942 ************************************ 00:18:21.942 END TEST exit_on_failed_rpc_init 00:18:21.942 ************************************ 00:18:21.942 10:29:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:18:21.942 00:18:21.942 real 0m13.193s 00:18:21.942 user 0m12.537s 00:18:21.942 sys 0m1.460s 00:18:21.942 10:29:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.942 10:29:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.942 ************************************ 00:18:21.942 END TEST skip_rpc 00:18:21.942 ************************************ 00:18:21.942 10:29:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:18:21.942 10:29:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:21.942 10:29:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.942 10:29:23 -- common/autotest_common.sh@10 -- # set +x 00:18:21.942 ************************************ 00:18:21.942 START TEST rpc_client 00:18:21.942 ************************************ 00:18:21.942 10:29:23 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:18:22.199 * Looking for test storage... 00:18:22.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:18:22.199 10:29:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.199 10:29:23 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.199 10:29:23 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.199 10:29:23 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.199 10:29:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.200 10:29:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.200 --rc genhtml_branch_coverage=1 00:18:22.200 --rc genhtml_function_coverage=1 00:18:22.200 --rc genhtml_legend=1 00:18:22.200 --rc geninfo_all_blocks=1 00:18:22.200 --rc geninfo_unexecuted_blocks=1 00:18:22.200 00:18:22.200 ' 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.200 --rc genhtml_branch_coverage=1 
00:18:22.200 --rc genhtml_function_coverage=1 00:18:22.200 --rc genhtml_legend=1 00:18:22.200 --rc geninfo_all_blocks=1 00:18:22.200 --rc geninfo_unexecuted_blocks=1 00:18:22.200 00:18:22.200 ' 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.200 --rc genhtml_branch_coverage=1 00:18:22.200 --rc genhtml_function_coverage=1 00:18:22.200 --rc genhtml_legend=1 00:18:22.200 --rc geninfo_all_blocks=1 00:18:22.200 --rc geninfo_unexecuted_blocks=1 00:18:22.200 00:18:22.200 ' 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.200 --rc genhtml_branch_coverage=1 00:18:22.200 --rc genhtml_function_coverage=1 00:18:22.200 --rc genhtml_legend=1 00:18:22.200 --rc geninfo_all_blocks=1 00:18:22.200 --rc geninfo_unexecuted_blocks=1 00:18:22.200 00:18:22.200 ' 00:18:22.200 10:29:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:18:22.200 OK 00:18:22.200 10:29:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:18:22.200 00:18:22.200 real 0m0.177s 00:18:22.200 user 0m0.096s 00:18:22.200 sys 0m0.093s 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.200 10:29:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:18:22.200 ************************************ 00:18:22.200 END TEST rpc_client 00:18:22.200 ************************************ 00:18:22.200 10:29:23 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:18:22.200 10:29:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.200 10:29:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.200 10:29:23 -- common/autotest_common.sh@10 
-- # set +x 00:18:22.200 ************************************ 00:18:22.200 START TEST json_config 00:18:22.200 ************************************ 00:18:22.200 10:29:23 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:18:22.200 10:29:23 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.200 10:29:23 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.200 10:29:23 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.458 10:29:23 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.458 10:29:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.458 10:29:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.458 10:29:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.458 10:29:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.458 10:29:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.458 10:29:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:18:22.458 10:29:23 json_config -- scripts/common.sh@345 -- # : 1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.458 10:29:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.458 10:29:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@353 -- # local d=1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.458 10:29:23 json_config -- scripts/common.sh@355 -- # echo 1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.458 10:29:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@353 -- # local d=2 00:18:22.458 10:29:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.459 10:29:23 json_config -- scripts/common.sh@355 -- # echo 2 00:18:22.459 10:29:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.459 10:29:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.459 10:29:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.459 10:29:23 json_config -- scripts/common.sh@368 -- # return 0 00:18:22.459 10:29:23 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.459 10:29:23 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.459 --rc genhtml_branch_coverage=1 00:18:22.459 --rc genhtml_function_coverage=1 00:18:22.459 --rc genhtml_legend=1 00:18:22.459 --rc geninfo_all_blocks=1 00:18:22.459 --rc geninfo_unexecuted_blocks=1 00:18:22.459 00:18:22.459 ' 00:18:22.459 10:29:23 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.459 --rc genhtml_branch_coverage=1 00:18:22.459 --rc genhtml_function_coverage=1 00:18:22.459 --rc genhtml_legend=1 00:18:22.459 --rc geninfo_all_blocks=1 00:18:22.459 --rc geninfo_unexecuted_blocks=1 00:18:22.459 00:18:22.459 ' 00:18:22.459 10:29:23 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.459 --rc genhtml_branch_coverage=1 00:18:22.459 --rc genhtml_function_coverage=1 00:18:22.459 --rc genhtml_legend=1 00:18:22.459 --rc geninfo_all_blocks=1 00:18:22.459 --rc geninfo_unexecuted_blocks=1 00:18:22.459 00:18:22.459 ' 00:18:22.459 10:29:23 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.459 --rc genhtml_branch_coverage=1 00:18:22.459 --rc genhtml_function_coverage=1 00:18:22.459 --rc genhtml_legend=1 00:18:22.459 --rc geninfo_all_blocks=1 00:18:22.459 --rc geninfo_unexecuted_blocks=1 00:18:22.459 00:18:22.459 ' 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.459 10:29:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.459 10:29:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.459 10:29:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.459 10:29:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.459 10:29:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.459 10:29:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.459 10:29:23 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.459 10:29:23 json_config -- paths/export.sh@5 -- # export PATH 00:18:22.459 10:29:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@51 -- # : 0 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.459 10:29:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:18:22.459 10:29:23 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:22.460 10:29:23 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:18:22.460 INFO: JSON configuration test init 00:18:22.460 10:29:23 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:18:22.460 10:29:23 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:22.460 10:29:23 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:22.460 10:29:23 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:18:22.460 10:29:23 json_config -- json_config/common.sh@9 -- # local app=target 00:18:22.460 10:29:23 json_config -- json_config/common.sh@10 -- # shift 00:18:22.460 10:29:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:22.460 10:29:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:22.460 10:29:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:18:22.460 10:29:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:22.460 10:29:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:22.460 10:29:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=469310 00:18:22.460 10:29:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:22.460 Waiting for target to run... 
00:18:22.460 10:29:23 json_config -- json_config/common.sh@25 -- # waitforlisten 469310 /var/tmp/spdk_tgt.sock 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 469310 ']' 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:22.460 10:29:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:22.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.460 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:22.460 [2024-12-09 10:29:23.544119] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:22.460 [2024-12-09 10:29:23.544167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469310 ] 00:18:22.718 [2024-12-09 10:29:23.815946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.718 [2024-12-09 10:29:23.849635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:18:23.283 10:29:24 json_config -- json_config/common.sh@26 -- # echo '' 00:18:23.283 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.283 10:29:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:18:23.283 10:29:24 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:18:23.283 10:29:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:18:26.569 10:29:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@54 -- # sort 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:18:26.569 10:29:27 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:18:26.569 10:29:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.569 10:29:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:26.829 10:29:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:18:26.829 10:29:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:18:26.829 10:29:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:18:26.829 10:29:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:18:26.829 10:29:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:18:26.829 MallocForNvmf0 00:18:26.829 10:29:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:18:26.829 10:29:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:18:27.088 MallocForNvmf1 00:18:27.088 10:29:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:18:27.088 10:29:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:18:27.347 [2024-12-09 10:29:28.298305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.347 10:29:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.347 10:29:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.347 10:29:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:18:27.347 10:29:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:18:27.606 10:29:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:18:27.606 10:29:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:18:27.865 10:29:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:18:27.865 10:29:28 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:18:28.124 [2024-12-09 10:29:29.052652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:18:28.124 10:29:29 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:18:28.124 10:29:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.124 10:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:28.124 10:29:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:18:28.124 10:29:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.124 10:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:28.124 10:29:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:18:28.124 10:29:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:18:28.124 10:29:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:18:28.383 MallocBdevForConfigChangeCheck 00:18:28.383 10:29:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:18:28.383 10:29:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.383 10:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:28.383 10:29:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:18:28.383 10:29:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:28.641 10:29:29 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:18:28.641 INFO: shutting down applications... 00:18:28.641 10:29:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:18:28.641 10:29:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:18:28.641 10:29:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:18:28.641 10:29:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:18:30.547 Calling clear_iscsi_subsystem 00:18:30.547 Calling clear_nvmf_subsystem 00:18:30.547 Calling clear_nbd_subsystem 00:18:30.547 Calling clear_ublk_subsystem 00:18:30.547 Calling clear_vhost_blk_subsystem 00:18:30.547 Calling clear_vhost_scsi_subsystem 00:18:30.547 Calling clear_bdev_subsystem 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@352 -- # break 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:18:30.547 10:29:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:18:30.547 10:29:31 json_config -- json_config/common.sh@31 -- # local app=target 00:18:30.547 10:29:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:30.547 10:29:31 json_config -- json_config/common.sh@35 -- # [[ -n 469310 ]] 00:18:30.547 10:29:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 469310 00:18:30.547 10:29:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:30.547 10:29:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:30.547 10:29:31 json_config -- json_config/common.sh@41 -- # kill -0 469310 00:18:30.547 10:29:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:18:31.114 10:29:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:18:31.114 10:29:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:31.114 10:29:32 json_config -- json_config/common.sh@41 -- # kill -0 469310 00:18:31.114 10:29:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:31.114 10:29:32 json_config -- json_config/common.sh@43 -- # break 00:18:31.114 10:29:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:31.114 10:29:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:31.115 SPDK target shutdown done 00:18:31.115 10:29:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:18:31.115 INFO: relaunching applications... 
00:18:31.115 10:29:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:31.115 10:29:32 json_config -- json_config/common.sh@9 -- # local app=target 00:18:31.115 10:29:32 json_config -- json_config/common.sh@10 -- # shift 00:18:31.115 10:29:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:31.115 10:29:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:31.115 10:29:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:18:31.115 10:29:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:31.115 10:29:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:31.115 10:29:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=470833 00:18:31.115 10:29:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:31.115 Waiting for target to run... 00:18:31.115 10:29:32 json_config -- json_config/common.sh@25 -- # waitforlisten 470833 /var/tmp/spdk_tgt.sock 00:18:31.115 10:29:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 470833 ']' 00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:31.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.115 10:29:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 [2024-12-09 10:29:32.143684] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:31.115 [2024-12-09 10:29:32.143743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470833 ] 00:18:31.682 [2024-12-09 10:29:32.597757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.682 [2024-12-09 10:29:32.654585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.971 [2024-12-09 10:29:35.683778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.972 [2024-12-09 10:29:35.716135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:18:35.229 10:29:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.229 10:29:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:18:35.229 10:29:36 json_config -- json_config/common.sh@26 -- # echo '' 00:18:35.229 00:18:35.229 10:29:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:18:35.229 10:29:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:18:35.229 INFO: Checking if target configuration is the same... 
00:18:35.229 10:29:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:35.229 10:29:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:18:35.230 10:29:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:35.230 + '[' 2 -ne 2 ']' 00:18:35.230 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:18:35.230 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:18:35.230 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:18:35.230 +++ basename /dev/fd/62 00:18:35.230 ++ mktemp /tmp/62.XXX 00:18:35.230 + tmp_file_1=/tmp/62.IaE 00:18:35.230 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:35.230 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:18:35.230 + tmp_file_2=/tmp/spdk_tgt_config.json.A2l 00:18:35.230 + ret=0 00:18:35.230 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:18:35.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:18:35.797 + diff -u /tmp/62.IaE /tmp/spdk_tgt_config.json.A2l 00:18:35.797 + echo 'INFO: JSON config files are the same' 00:18:35.797 INFO: JSON config files are the same 00:18:35.797 + rm /tmp/62.IaE /tmp/spdk_tgt_config.json.A2l 00:18:35.797 + exit 0 00:18:35.797 10:29:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:18:35.797 10:29:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:18:35.797 INFO: changing configuration and checking if this can be detected... 
00:18:35.797 10:29:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:18:35.797 10:29:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:18:35.797 10:29:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:35.797 10:29:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:18:35.797 10:29:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:18:35.797 + '[' 2 -ne 2 ']' 00:18:35.797 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:18:35.797 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:18:35.797 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:18:35.797 +++ basename /dev/fd/62 00:18:35.797 ++ mktemp /tmp/62.XXX 00:18:35.797 + tmp_file_1=/tmp/62.LDI 00:18:35.797 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:35.797 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:18:35.797 + tmp_file_2=/tmp/spdk_tgt_config.json.9Ly 00:18:35.797 + ret=0 00:18:35.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:18:36.364 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:18:36.364 + diff -u /tmp/62.LDI /tmp/spdk_tgt_config.json.9Ly 00:18:36.364 + ret=1 00:18:36.364 + echo '=== Start of file: /tmp/62.LDI ===' 00:18:36.364 + cat /tmp/62.LDI 00:18:36.364 + echo '=== End of file: /tmp/62.LDI ===' 00:18:36.364 + echo '' 00:18:36.364 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9Ly ===' 00:18:36.364 + cat /tmp/spdk_tgt_config.json.9Ly 00:18:36.364 + echo '=== End of file: /tmp/spdk_tgt_config.json.9Ly ===' 00:18:36.364 + echo '' 00:18:36.364 + rm /tmp/62.LDI /tmp/spdk_tgt_config.json.9Ly 00:18:36.364 + exit 1 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:18:36.364 INFO: configuration change detected. 
00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:18:36.364 10:29:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.364 10:29:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 470833 ]] 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:18:36.364 10:29:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.364 10:29:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:18:36.364 10:29:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:18:36.365 10:29:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:18:36.365 10:29:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:36.365 10:29:37 json_config -- json_config/json_config.sh@330 -- # killprocess 470833 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 470833 ']' 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@958 -- # kill -0 470833 
00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@959 -- # uname 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470833 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470833' 00:18:36.365 killing process with pid 470833 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@973 -- # kill 470833 00:18:36.365 10:29:37 json_config -- common/autotest_common.sh@978 -- # wait 470833 00:18:38.268 10:29:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:18:38.268 10:29:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:18:38.268 10:29:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.268 10:29:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 10:29:39 json_config -- json_config/json_config.sh@335 -- # return 0 00:18:38.268 10:29:39 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:18:38.268 INFO: Success 00:18:38.268 00:18:38.268 real 0m15.705s 00:18:38.268 user 0m16.217s 00:18:38.268 sys 0m2.561s 00:18:38.268 10:29:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.268 10:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 ************************************ 00:18:38.268 END TEST json_config 00:18:38.268 ************************************ 00:18:38.268 10:29:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:18:38.268 10:29:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.268 10:29:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.268 10:29:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 ************************************ 00:18:38.268 START TEST json_config_extra_key 00:18:38.268 ************************************ 00:18:38.268 10:29:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:18:38.268 10:29:39 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:38.268 10:29:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:18:38.268 10:29:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:38.268 10:29:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:18:38.268 10:29:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:38.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.269 --rc genhtml_branch_coverage=1 00:18:38.269 --rc genhtml_function_coverage=1 00:18:38.269 --rc genhtml_legend=1 00:18:38.269 --rc geninfo_all_blocks=1 
00:18:38.269 --rc geninfo_unexecuted_blocks=1 00:18:38.269 00:18:38.269 ' 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:38.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.269 --rc genhtml_branch_coverage=1 00:18:38.269 --rc genhtml_function_coverage=1 00:18:38.269 --rc genhtml_legend=1 00:18:38.269 --rc geninfo_all_blocks=1 00:18:38.269 --rc geninfo_unexecuted_blocks=1 00:18:38.269 00:18:38.269 ' 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:38.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.269 --rc genhtml_branch_coverage=1 00:18:38.269 --rc genhtml_function_coverage=1 00:18:38.269 --rc genhtml_legend=1 00:18:38.269 --rc geninfo_all_blocks=1 00:18:38.269 --rc geninfo_unexecuted_blocks=1 00:18:38.269 00:18:38.269 ' 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:38.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.269 --rc genhtml_branch_coverage=1 00:18:38.269 --rc genhtml_function_coverage=1 00:18:38.269 --rc genhtml_legend=1 00:18:38.269 --rc geninfo_all_blocks=1 00:18:38.269 --rc geninfo_unexecuted_blocks=1 00:18:38.269 00:18:38.269 ' 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.269 10:29:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.269 10:29:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.269 10:29:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.269 10:29:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.269 10:29:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:18:38.269 10:29:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.269 10:29:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:18:38.269 INFO: launching applications... 00:18:38.269 10:29:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=472262 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:38.269 Waiting for target to run... 
00:18:38.269 10:29:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 472262 /var/tmp/spdk_tgt.sock 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 472262 ']' 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:38.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.269 10:29:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:38.269 [2024-12-09 10:29:39.285210] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:38.269 [2024-12-09 10:29:39.285259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472262 ] 00:18:38.527 [2024-12-09 10:29:39.556587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.527 [2024-12-09 10:29:39.590123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.092 10:29:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.092 10:29:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:18:39.092 00:18:39.092 10:29:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:18:39.092 INFO: shutting down applications... 00:18:39.092 10:29:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 472262 ]] 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 472262 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:39.092 10:29:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:39.093 10:29:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 472262 00:18:39.093 10:29:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 472262 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:39.658 10:29:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:39.658 SPDK target shutdown done 00:18:39.658 10:29:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:18:39.658 Success 00:18:39.658 00:18:39.658 real 0m1.582s 00:18:39.658 user 0m1.464s 00:18:39.658 sys 0m0.369s 00:18:39.658 10:29:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.658 10:29:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:18:39.658 ************************************ 00:18:39.658 END TEST json_config_extra_key 00:18:39.658 ************************************ 00:18:39.658 10:29:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:39.658 10:29:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.658 10:29:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.658 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.658 ************************************ 00:18:39.658 START TEST alias_rpc 00:18:39.658 ************************************ 00:18:39.658 10:29:40 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:39.658 * Looking for test storage... 00:18:39.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:18:39.658 10:29:40 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:39.658 10:29:40 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:39.658 10:29:40 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.917 10:29:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.917 --rc genhtml_branch_coverage=1 00:18:39.917 --rc genhtml_function_coverage=1 00:18:39.917 --rc genhtml_legend=1 00:18:39.917 --rc geninfo_all_blocks=1 00:18:39.917 --rc geninfo_unexecuted_blocks=1 00:18:39.917 00:18:39.917 ' 
00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.917 --rc genhtml_branch_coverage=1 00:18:39.917 --rc genhtml_function_coverage=1 00:18:39.917 --rc genhtml_legend=1 00:18:39.917 --rc geninfo_all_blocks=1 00:18:39.917 --rc geninfo_unexecuted_blocks=1 00:18:39.917 00:18:39.917 ' 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.917 --rc genhtml_branch_coverage=1 00:18:39.917 --rc genhtml_function_coverage=1 00:18:39.917 --rc genhtml_legend=1 00:18:39.917 --rc geninfo_all_blocks=1 00:18:39.917 --rc geninfo_unexecuted_blocks=1 00:18:39.917 00:18:39.917 ' 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.917 --rc genhtml_branch_coverage=1 00:18:39.917 --rc genhtml_function_coverage=1 00:18:39.917 --rc genhtml_legend=1 00:18:39.917 --rc geninfo_all_blocks=1 00:18:39.917 --rc geninfo_unexecuted_blocks=1 00:18:39.917 00:18:39.917 ' 00:18:39.917 10:29:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:39.917 10:29:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=472612 00:18:39.917 10:29:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 472612 00:18:39.917 10:29:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:39.917 10:29:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 472612 ']' 00:18:39.918 10:29:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.918 10:29:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.918 10:29:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.918 10:29:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.918 10:29:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.918 [2024-12-09 10:29:40.941239] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:39.918 [2024-12-09 10:29:40.941290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472612 ] 00:18:39.918 [2024-12-09 10:29:41.006252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.918 [2024-12-09 10:29:41.048627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.175 10:29:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.175 10:29:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:40.175 10:29:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:18:40.433 10:29:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 472612 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 472612 ']' 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 472612 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 472612 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.433 10:29:41 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 472612' 00:18:40.433 killing process with pid 472612 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 472612 00:18:40.433 10:29:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 472612 00:18:40.999 00:18:40.999 real 0m1.159s 00:18:40.999 user 0m1.204s 00:18:40.999 sys 0m0.395s 00:18:40.999 10:29:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.999 10:29:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.999 ************************************ 00:18:40.999 END TEST alias_rpc 00:18:40.999 ************************************ 00:18:40.999 10:29:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:18:40.999 10:29:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:18:40.999 10:29:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.999 10:29:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.999 10:29:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.999 ************************************ 00:18:40.999 START TEST spdkcli_tcp 00:18:40.999 ************************************ 00:18:40.999 10:29:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:18:40.999 * Looking for test storage... 
00:18:40.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.999 10:29:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.999 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.999 --rc genhtml_branch_coverage=1 00:18:40.999 --rc genhtml_function_coverage=1 00:18:40.999 --rc genhtml_legend=1 00:18:40.999 --rc geninfo_all_blocks=1 00:18:40.999 --rc geninfo_unexecuted_blocks=1 00:18:40.999 00:18:41.000 ' 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.000 --rc genhtml_branch_coverage=1 00:18:41.000 --rc genhtml_function_coverage=1 00:18:41.000 --rc genhtml_legend=1 00:18:41.000 --rc geninfo_all_blocks=1 00:18:41.000 --rc geninfo_unexecuted_blocks=1 00:18:41.000 00:18:41.000 ' 00:18:41.000 10:29:42 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.000 --rc genhtml_branch_coverage=1 00:18:41.000 --rc genhtml_function_coverage=1 00:18:41.000 --rc genhtml_legend=1 00:18:41.000 --rc geninfo_all_blocks=1 00:18:41.000 --rc geninfo_unexecuted_blocks=1 00:18:41.000 00:18:41.000 ' 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.000 --rc genhtml_branch_coverage=1 00:18:41.000 --rc genhtml_function_coverage=1 00:18:41.000 --rc genhtml_legend=1 00:18:41.000 --rc geninfo_all_blocks=1 00:18:41.000 --rc geninfo_unexecuted_blocks=1 00:18:41.000 00:18:41.000 ' 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=472900 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 472900 00:18:41.000 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 472900 ']' 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.000 10:29:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.000 [2024-12-09 10:29:42.161523] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:41.000 [2024-12-09 10:29:42.161567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472900 ] 00:18:41.258 [2024-12-09 10:29:42.226085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.258 [2024-12-09 10:29:42.269919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.258 [2024-12-09 10:29:42.269923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.518 10:29:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.518 10:29:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:18:41.518 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=472910 00:18:41.518 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:18:41.518 10:29:42 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:18:41.518 [ 00:18:41.518 "bdev_malloc_delete", 00:18:41.518 "bdev_malloc_create", 00:18:41.518 "bdev_null_resize", 00:18:41.518 "bdev_null_delete", 00:18:41.518 "bdev_null_create", 00:18:41.518 "bdev_nvme_cuse_unregister", 00:18:41.518 "bdev_nvme_cuse_register", 00:18:41.518 "bdev_opal_new_user", 00:18:41.518 "bdev_opal_set_lock_state", 00:18:41.518 "bdev_opal_delete", 00:18:41.518 "bdev_opal_get_info", 00:18:41.518 "bdev_opal_create", 00:18:41.518 "bdev_nvme_opal_revert", 00:18:41.518 "bdev_nvme_opal_init", 00:18:41.518 "bdev_nvme_send_cmd", 00:18:41.518 "bdev_nvme_set_keys", 00:18:41.518 "bdev_nvme_get_path_iostat", 00:18:41.518 "bdev_nvme_get_mdns_discovery_info", 00:18:41.518 "bdev_nvme_stop_mdns_discovery", 00:18:41.518 "bdev_nvme_start_mdns_discovery", 00:18:41.518 "bdev_nvme_set_multipath_policy", 00:18:41.518 "bdev_nvme_set_preferred_path", 00:18:41.518 "bdev_nvme_get_io_paths", 00:18:41.518 "bdev_nvme_remove_error_injection", 00:18:41.518 "bdev_nvme_add_error_injection", 00:18:41.518 "bdev_nvme_get_discovery_info", 00:18:41.518 "bdev_nvme_stop_discovery", 00:18:41.518 "bdev_nvme_start_discovery", 00:18:41.518 "bdev_nvme_get_controller_health_info", 00:18:41.518 "bdev_nvme_disable_controller", 00:18:41.518 "bdev_nvme_enable_controller", 00:18:41.518 "bdev_nvme_reset_controller", 00:18:41.518 "bdev_nvme_get_transport_statistics", 00:18:41.518 "bdev_nvme_apply_firmware", 00:18:41.518 "bdev_nvme_detach_controller", 00:18:41.518 "bdev_nvme_get_controllers", 00:18:41.518 "bdev_nvme_attach_controller", 00:18:41.518 "bdev_nvme_set_hotplug", 00:18:41.518 "bdev_nvme_set_options", 00:18:41.518 "bdev_passthru_delete", 00:18:41.518 "bdev_passthru_create", 00:18:41.518 "bdev_lvol_set_parent_bdev", 00:18:41.518 "bdev_lvol_set_parent", 00:18:41.518 "bdev_lvol_check_shallow_copy", 00:18:41.518 "bdev_lvol_start_shallow_copy", 00:18:41.518 "bdev_lvol_grow_lvstore", 00:18:41.518 
"bdev_lvol_get_lvols", 00:18:41.518 "bdev_lvol_get_lvstores", 00:18:41.518 "bdev_lvol_delete", 00:18:41.518 "bdev_lvol_set_read_only", 00:18:41.518 "bdev_lvol_resize", 00:18:41.518 "bdev_lvol_decouple_parent", 00:18:41.518 "bdev_lvol_inflate", 00:18:41.518 "bdev_lvol_rename", 00:18:41.518 "bdev_lvol_clone_bdev", 00:18:41.518 "bdev_lvol_clone", 00:18:41.518 "bdev_lvol_snapshot", 00:18:41.518 "bdev_lvol_create", 00:18:41.518 "bdev_lvol_delete_lvstore", 00:18:41.518 "bdev_lvol_rename_lvstore", 00:18:41.518 "bdev_lvol_create_lvstore", 00:18:41.518 "bdev_raid_set_options", 00:18:41.518 "bdev_raid_remove_base_bdev", 00:18:41.518 "bdev_raid_add_base_bdev", 00:18:41.518 "bdev_raid_delete", 00:18:41.518 "bdev_raid_create", 00:18:41.518 "bdev_raid_get_bdevs", 00:18:41.518 "bdev_error_inject_error", 00:18:41.518 "bdev_error_delete", 00:18:41.518 "bdev_error_create", 00:18:41.518 "bdev_split_delete", 00:18:41.518 "bdev_split_create", 00:18:41.518 "bdev_delay_delete", 00:18:41.518 "bdev_delay_create", 00:18:41.518 "bdev_delay_update_latency", 00:18:41.518 "bdev_zone_block_delete", 00:18:41.518 "bdev_zone_block_create", 00:18:41.518 "blobfs_create", 00:18:41.518 "blobfs_detect", 00:18:41.518 "blobfs_set_cache_size", 00:18:41.518 "bdev_aio_delete", 00:18:41.518 "bdev_aio_rescan", 00:18:41.518 "bdev_aio_create", 00:18:41.518 "bdev_ftl_set_property", 00:18:41.518 "bdev_ftl_get_properties", 00:18:41.518 "bdev_ftl_get_stats", 00:18:41.518 "bdev_ftl_unmap", 00:18:41.518 "bdev_ftl_unload", 00:18:41.518 "bdev_ftl_delete", 00:18:41.518 "bdev_ftl_load", 00:18:41.518 "bdev_ftl_create", 00:18:41.518 "bdev_virtio_attach_controller", 00:18:41.518 "bdev_virtio_scsi_get_devices", 00:18:41.518 "bdev_virtio_detach_controller", 00:18:41.518 "bdev_virtio_blk_set_hotplug", 00:18:41.518 "bdev_iscsi_delete", 00:18:41.518 "bdev_iscsi_create", 00:18:41.518 "bdev_iscsi_set_options", 00:18:41.518 "accel_error_inject_error", 00:18:41.518 "ioat_scan_accel_module", 00:18:41.518 "dsa_scan_accel_module", 
00:18:41.518 "iaa_scan_accel_module", 00:18:41.518 "vfu_virtio_create_fs_endpoint", 00:18:41.518 "vfu_virtio_create_scsi_endpoint", 00:18:41.518 "vfu_virtio_scsi_remove_target", 00:18:41.518 "vfu_virtio_scsi_add_target", 00:18:41.518 "vfu_virtio_create_blk_endpoint", 00:18:41.518 "vfu_virtio_delete_endpoint", 00:18:41.518 "keyring_file_remove_key", 00:18:41.518 "keyring_file_add_key", 00:18:41.518 "keyring_linux_set_options", 00:18:41.518 "fsdev_aio_delete", 00:18:41.518 "fsdev_aio_create", 00:18:41.518 "iscsi_get_histogram", 00:18:41.518 "iscsi_enable_histogram", 00:18:41.518 "iscsi_set_options", 00:18:41.518 "iscsi_get_auth_groups", 00:18:41.518 "iscsi_auth_group_remove_secret", 00:18:41.518 "iscsi_auth_group_add_secret", 00:18:41.518 "iscsi_delete_auth_group", 00:18:41.518 "iscsi_create_auth_group", 00:18:41.518 "iscsi_set_discovery_auth", 00:18:41.518 "iscsi_get_options", 00:18:41.518 "iscsi_target_node_request_logout", 00:18:41.518 "iscsi_target_node_set_redirect", 00:18:41.518 "iscsi_target_node_set_auth", 00:18:41.518 "iscsi_target_node_add_lun", 00:18:41.518 "iscsi_get_stats", 00:18:41.518 "iscsi_get_connections", 00:18:41.518 "iscsi_portal_group_set_auth", 00:18:41.518 "iscsi_start_portal_group", 00:18:41.518 "iscsi_delete_portal_group", 00:18:41.518 "iscsi_create_portal_group", 00:18:41.518 "iscsi_get_portal_groups", 00:18:41.518 "iscsi_delete_target_node", 00:18:41.518 "iscsi_target_node_remove_pg_ig_maps", 00:18:41.518 "iscsi_target_node_add_pg_ig_maps", 00:18:41.518 "iscsi_create_target_node", 00:18:41.518 "iscsi_get_target_nodes", 00:18:41.518 "iscsi_delete_initiator_group", 00:18:41.518 "iscsi_initiator_group_remove_initiators", 00:18:41.518 "iscsi_initiator_group_add_initiators", 00:18:41.518 "iscsi_create_initiator_group", 00:18:41.518 "iscsi_get_initiator_groups", 00:18:41.518 "nvmf_set_crdt", 00:18:41.518 "nvmf_set_config", 00:18:41.518 "nvmf_set_max_subsystems", 00:18:41.518 "nvmf_stop_mdns_prr", 00:18:41.518 "nvmf_publish_mdns_prr", 
00:18:41.518 "nvmf_subsystem_get_listeners", 00:18:41.518 "nvmf_subsystem_get_qpairs", 00:18:41.518 "nvmf_subsystem_get_controllers", 00:18:41.518 "nvmf_get_stats", 00:18:41.518 "nvmf_get_transports", 00:18:41.518 "nvmf_create_transport", 00:18:41.518 "nvmf_get_targets", 00:18:41.518 "nvmf_delete_target", 00:18:41.518 "nvmf_create_target", 00:18:41.518 "nvmf_subsystem_allow_any_host", 00:18:41.518 "nvmf_subsystem_set_keys", 00:18:41.518 "nvmf_subsystem_remove_host", 00:18:41.519 "nvmf_subsystem_add_host", 00:18:41.519 "nvmf_ns_remove_host", 00:18:41.519 "nvmf_ns_add_host", 00:18:41.519 "nvmf_subsystem_remove_ns", 00:18:41.519 "nvmf_subsystem_set_ns_ana_group", 00:18:41.519 "nvmf_subsystem_add_ns", 00:18:41.519 "nvmf_subsystem_listener_set_ana_state", 00:18:41.519 "nvmf_discovery_get_referrals", 00:18:41.519 "nvmf_discovery_remove_referral", 00:18:41.519 "nvmf_discovery_add_referral", 00:18:41.519 "nvmf_subsystem_remove_listener", 00:18:41.519 "nvmf_subsystem_add_listener", 00:18:41.519 "nvmf_delete_subsystem", 00:18:41.519 "nvmf_create_subsystem", 00:18:41.519 "nvmf_get_subsystems", 00:18:41.519 "env_dpdk_get_mem_stats", 00:18:41.519 "nbd_get_disks", 00:18:41.519 "nbd_stop_disk", 00:18:41.519 "nbd_start_disk", 00:18:41.519 "ublk_recover_disk", 00:18:41.519 "ublk_get_disks", 00:18:41.519 "ublk_stop_disk", 00:18:41.519 "ublk_start_disk", 00:18:41.519 "ublk_destroy_target", 00:18:41.519 "ublk_create_target", 00:18:41.519 "virtio_blk_create_transport", 00:18:41.519 "virtio_blk_get_transports", 00:18:41.519 "vhost_controller_set_coalescing", 00:18:41.519 "vhost_get_controllers", 00:18:41.519 "vhost_delete_controller", 00:18:41.519 "vhost_create_blk_controller", 00:18:41.519 "vhost_scsi_controller_remove_target", 00:18:41.519 "vhost_scsi_controller_add_target", 00:18:41.519 "vhost_start_scsi_controller", 00:18:41.519 "vhost_create_scsi_controller", 00:18:41.519 "thread_set_cpumask", 00:18:41.519 "scheduler_set_options", 00:18:41.519 "framework_get_governor", 00:18:41.519 
"framework_get_scheduler", 00:18:41.519 "framework_set_scheduler", 00:18:41.519 "framework_get_reactors", 00:18:41.519 "thread_get_io_channels", 00:18:41.519 "thread_get_pollers", 00:18:41.519 "thread_get_stats", 00:18:41.519 "framework_monitor_context_switch", 00:18:41.519 "spdk_kill_instance", 00:18:41.519 "log_enable_timestamps", 00:18:41.519 "log_get_flags", 00:18:41.519 "log_clear_flag", 00:18:41.519 "log_set_flag", 00:18:41.519 "log_get_level", 00:18:41.519 "log_set_level", 00:18:41.519 "log_get_print_level", 00:18:41.519 "log_set_print_level", 00:18:41.519 "framework_enable_cpumask_locks", 00:18:41.519 "framework_disable_cpumask_locks", 00:18:41.519 "framework_wait_init", 00:18:41.519 "framework_start_init", 00:18:41.519 "scsi_get_devices", 00:18:41.519 "bdev_get_histogram", 00:18:41.519 "bdev_enable_histogram", 00:18:41.519 "bdev_set_qos_limit", 00:18:41.519 "bdev_set_qd_sampling_period", 00:18:41.519 "bdev_get_bdevs", 00:18:41.519 "bdev_reset_iostat", 00:18:41.519 "bdev_get_iostat", 00:18:41.519 "bdev_examine", 00:18:41.519 "bdev_wait_for_examine", 00:18:41.519 "bdev_set_options", 00:18:41.519 "accel_get_stats", 00:18:41.519 "accel_set_options", 00:18:41.519 "accel_set_driver", 00:18:41.519 "accel_crypto_key_destroy", 00:18:41.519 "accel_crypto_keys_get", 00:18:41.519 "accel_crypto_key_create", 00:18:41.519 "accel_assign_opc", 00:18:41.519 "accel_get_module_info", 00:18:41.519 "accel_get_opc_assignments", 00:18:41.519 "vmd_rescan", 00:18:41.519 "vmd_remove_device", 00:18:41.519 "vmd_enable", 00:18:41.519 "sock_get_default_impl", 00:18:41.519 "sock_set_default_impl", 00:18:41.519 "sock_impl_set_options", 00:18:41.519 "sock_impl_get_options", 00:18:41.519 "iobuf_get_stats", 00:18:41.519 "iobuf_set_options", 00:18:41.519 "keyring_get_keys", 00:18:41.519 "vfu_tgt_set_base_path", 00:18:41.519 "framework_get_pci_devices", 00:18:41.519 "framework_get_config", 00:18:41.519 "framework_get_subsystems", 00:18:41.519 "fsdev_set_opts", 00:18:41.519 "fsdev_get_opts", 
00:18:41.519 "trace_get_info", 00:18:41.519 "trace_get_tpoint_group_mask", 00:18:41.519 "trace_disable_tpoint_group", 00:18:41.519 "trace_enable_tpoint_group", 00:18:41.519 "trace_clear_tpoint_mask", 00:18:41.519 "trace_set_tpoint_mask", 00:18:41.519 "notify_get_notifications", 00:18:41.519 "notify_get_types", 00:18:41.519 "spdk_get_version", 00:18:41.519 "rpc_get_methods" 00:18:41.519 ] 00:18:41.519 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:18:41.519 10:29:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.519 10:29:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.779 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:41.779 10:29:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 472900 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 472900 ']' 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 472900 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 472900 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 472900' 00:18:41.779 killing process with pid 472900 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 472900 00:18:41.779 10:29:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 472900 00:18:42.039 00:18:42.039 real 0m1.146s 00:18:42.039 user 0m1.914s 00:18:42.039 sys 0m0.423s 00:18:42.039 10:29:43 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.039 10:29:43 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.039 ************************************ 00:18:42.039 END TEST spdkcli_tcp 00:18:42.039 ************************************ 00:18:42.039 10:29:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:42.039 10:29:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:42.039 10:29:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.039 10:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:42.039 ************************************ 00:18:42.039 START TEST dpdk_mem_utility 00:18:42.039 ************************************ 00:18:42.039 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:42.299 * Looking for test storage... 00:18:42.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.299 10:29:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.299 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:18:42.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.299 --rc genhtml_branch_coverage=1 00:18:42.299 --rc genhtml_function_coverage=1 00:18:42.299 --rc genhtml_legend=1 00:18:42.299 --rc geninfo_all_blocks=1 00:18:42.299 --rc geninfo_unexecuted_blocks=1 00:18:42.299 00:18:42.299 ' 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.300 --rc genhtml_branch_coverage=1 00:18:42.300 --rc genhtml_function_coverage=1 00:18:42.300 --rc genhtml_legend=1 00:18:42.300 --rc geninfo_all_blocks=1 00:18:42.300 --rc geninfo_unexecuted_blocks=1 00:18:42.300 00:18:42.300 ' 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.300 --rc genhtml_branch_coverage=1 00:18:42.300 --rc genhtml_function_coverage=1 00:18:42.300 --rc genhtml_legend=1 00:18:42.300 --rc geninfo_all_blocks=1 00:18:42.300 --rc geninfo_unexecuted_blocks=1 00:18:42.300 00:18:42.300 ' 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.300 --rc genhtml_branch_coverage=1 00:18:42.300 --rc genhtml_function_coverage=1 00:18:42.300 --rc genhtml_legend=1 00:18:42.300 --rc geninfo_all_blocks=1 00:18:42.300 --rc geninfo_unexecuted_blocks=1 00:18:42.300 00:18:42.300 ' 00:18:42.300 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:18:42.300 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=473127 00:18:42.300 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:18:42.300 10:29:43 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 473127 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 473127 ']' 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.300 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:42.300 [2024-12-09 10:29:43.380616] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:42.300 [2024-12-09 10:29:43.380668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473127 ] 00:18:42.300 [2024-12-09 10:29:43.446447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.560 [2024-12-09 10:29:43.489992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.560 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.560 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:18:42.560 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:42.560 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:42.560 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.560 
10:29:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:42.560 { 00:18:42.560 "filename": "/tmp/spdk_mem_dump.txt" 00:18:42.560 } 00:18:42.560 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.560 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:18:42.820 DPDK memory size 818.000000 MiB in 1 heap(s) 00:18:42.820 1 heaps totaling size 818.000000 MiB 00:18:42.820 size: 818.000000 MiB heap id: 0 00:18:42.820 end heaps---------- 00:18:42.820 9 mempools totaling size 603.782043 MiB 00:18:42.820 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:42.820 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:42.820 size: 100.555481 MiB name: bdev_io_473127 00:18:42.820 size: 50.003479 MiB name: msgpool_473127 00:18:42.820 size: 36.509338 MiB name: fsdev_io_473127 00:18:42.820 size: 21.763794 MiB name: PDU_Pool 00:18:42.820 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:42.820 size: 4.133484 MiB name: evtpool_473127 00:18:42.820 size: 0.026123 MiB name: Session_Pool 00:18:42.820 end mempools------- 00:18:42.820 6 memzones totaling size 4.142822 MiB 00:18:42.820 size: 1.000366 MiB name: RG_ring_0_473127 00:18:42.820 size: 1.000366 MiB name: RG_ring_1_473127 00:18:42.820 size: 1.000366 MiB name: RG_ring_4_473127 00:18:42.820 size: 1.000366 MiB name: RG_ring_5_473127 00:18:42.820 size: 0.125366 MiB name: RG_ring_2_473127 00:18:42.820 size: 0.015991 MiB name: RG_ring_3_473127 00:18:42.820 end memzones------- 00:18:42.820 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:18:42.820 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:18:42.820 list of free elements. 
size: 10.852478 MiB 00:18:42.820 element at address: 0x200019200000 with size: 0.999878 MiB 00:18:42.820 element at address: 0x200019400000 with size: 0.999878 MiB 00:18:42.820 element at address: 0x200000400000 with size: 0.998535 MiB 00:18:42.820 element at address: 0x200032000000 with size: 0.994446 MiB 00:18:42.820 element at address: 0x200006400000 with size: 0.959839 MiB 00:18:42.820 element at address: 0x200012c00000 with size: 0.944275 MiB 00:18:42.820 element at address: 0x200019600000 with size: 0.936584 MiB 00:18:42.820 element at address: 0x200000200000 with size: 0.717346 MiB 00:18:42.820 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:18:42.820 element at address: 0x200000c00000 with size: 0.495422 MiB 00:18:42.820 element at address: 0x20000a600000 with size: 0.490723 MiB 00:18:42.820 element at address: 0x200019800000 with size: 0.485657 MiB 00:18:42.820 element at address: 0x200003e00000 with size: 0.481934 MiB 00:18:42.820 element at address: 0x200028200000 with size: 0.410034 MiB 00:18:42.820 element at address: 0x200000800000 with size: 0.355042 MiB 00:18:42.820 list of standard malloc elements. 
size: 199.218628 MiB 00:18:42.820 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:18:42.820 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:18:42.820 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:18:42.820 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:18:42.820 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:18:42.820 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:18:42.820 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:18:42.820 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:18:42.821 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:18:42.821 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000085b040 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000085f300 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000087f680 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200000cff000 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:18:42.821 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200003efb980 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:18:42.821 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200028268f80 with size: 0.000183 MiB 00:18:42.821 element at address: 0x200028269040 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:18:42.821 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:18:42.821 list of memzone associated elements. 
size: 607.928894 MiB 00:18:42.821 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:18:42.821 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:42.821 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:18:42.821 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:42.821 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:18:42.821 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_473127_0 00:18:42.821 element at address: 0x200000dff380 with size: 48.003052 MiB 00:18:42.821 associated memzone info: size: 48.002930 MiB name: MP_msgpool_473127_0 00:18:42.821 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:18:42.821 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_473127_0 00:18:42.821 element at address: 0x2000199be940 with size: 20.255554 MiB 00:18:42.821 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:42.821 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:18:42.821 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:18:42.821 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:18:42.821 associated memzone info: size: 3.000122 MiB name: MP_evtpool_473127_0 00:18:42.821 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:18:42.821 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_473127 00:18:42.821 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:18:42.821 associated memzone info: size: 1.007996 MiB name: MP_evtpool_473127 00:18:42.821 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:18:42.821 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:42.821 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:18:42.821 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:42.821 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:18:42.821 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:42.821 element at address: 0x200003efba40 with size: 1.008118 MiB 00:18:42.821 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:42.821 element at address: 0x200000cff180 with size: 1.000488 MiB 00:18:42.821 associated memzone info: size: 1.000366 MiB name: RG_ring_0_473127 00:18:42.821 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:18:42.821 associated memzone info: size: 1.000366 MiB name: RG_ring_1_473127 00:18:42.821 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:18:42.821 associated memzone info: size: 1.000366 MiB name: RG_ring_4_473127 00:18:42.821 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:18:42.821 associated memzone info: size: 1.000366 MiB name: RG_ring_5_473127 00:18:42.821 element at address: 0x20000087f740 with size: 0.500488 MiB 00:18:42.821 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_473127 00:18:42.821 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:18:42.821 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_473127 00:18:42.821 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:18:42.821 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:42.821 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:18:42.821 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:18:42.821 element at address: 0x20001987c540 with size: 0.250488 MiB 00:18:42.821 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:42.821 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:18:42.821 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_473127 00:18:42.821 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:18:42.821 associated memzone info: size: 0.125366 MiB name: RG_ring_2_473127 00:18:42.821 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:18:42.821 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:42.821 element at address: 0x200028269100 with size: 0.023743 MiB 00:18:42.821 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:42.821 element at address: 0x20000085b100 with size: 0.016113 MiB 00:18:42.821 associated memzone info: size: 0.015991 MiB name: RG_ring_3_473127 00:18:42.821 element at address: 0x20002826f240 with size: 0.002441 MiB 00:18:42.821 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:42.821 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:18:42.821 associated memzone info: size: 0.000183 MiB name: MP_msgpool_473127 00:18:42.821 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:18:42.821 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_473127 00:18:42.821 element at address: 0x20000085af00 with size: 0.000305 MiB 00:18:42.821 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_473127 00:18:42.821 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:18:42.821 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:18:42.821 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:42.821 10:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 473127 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 473127 ']' 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 473127 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473127 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.821 10:29:43 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473127' 00:18:42.821 killing process with pid 473127 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 473127 00:18:42.821 10:29:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 473127 00:18:43.081 00:18:43.081 real 0m1.022s 00:18:43.081 user 0m0.957s 00:18:43.081 sys 0m0.399s 00:18:43.081 10:29:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.081 10:29:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:43.081 ************************************ 00:18:43.081 END TEST dpdk_mem_utility 00:18:43.081 ************************************ 00:18:43.081 10:29:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:18:43.081 10:29:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.081 10:29:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.081 10:29:44 -- common/autotest_common.sh@10 -- # set +x 00:18:43.081 ************************************ 00:18:43.081 START TEST event 00:18:43.081 ************************************ 00:18:43.081 10:29:44 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:18:43.341 * Looking for test storage... 
00:18:43.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.341 10:29:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.341 10:29:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.341 10:29:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.341 10:29:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.341 10:29:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.341 10:29:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.341 10:29:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.341 10:29:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.341 10:29:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.341 10:29:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.341 10:29:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.341 10:29:44 event -- scripts/common.sh@344 -- # case "$op" in 00:18:43.341 10:29:44 event -- scripts/common.sh@345 -- # : 1 00:18:43.341 10:29:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.341 10:29:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.341 10:29:44 event -- scripts/common.sh@365 -- # decimal 1 00:18:43.341 10:29:44 event -- scripts/common.sh@353 -- # local d=1 00:18:43.341 10:29:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.341 10:29:44 event -- scripts/common.sh@355 -- # echo 1 00:18:43.341 10:29:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.341 10:29:44 event -- scripts/common.sh@366 -- # decimal 2 00:18:43.341 10:29:44 event -- scripts/common.sh@353 -- # local d=2 00:18:43.341 10:29:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.341 10:29:44 event -- scripts/common.sh@355 -- # echo 2 00:18:43.341 10:29:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.341 10:29:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.341 10:29:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.341 10:29:44 event -- scripts/common.sh@368 -- # return 0 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.341 --rc genhtml_branch_coverage=1 00:18:43.341 --rc genhtml_function_coverage=1 00:18:43.341 --rc genhtml_legend=1 00:18:43.341 --rc geninfo_all_blocks=1 00:18:43.341 --rc geninfo_unexecuted_blocks=1 00:18:43.341 00:18:43.341 ' 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.341 --rc genhtml_branch_coverage=1 00:18:43.341 --rc genhtml_function_coverage=1 00:18:43.341 --rc genhtml_legend=1 00:18:43.341 --rc geninfo_all_blocks=1 00:18:43.341 --rc geninfo_unexecuted_blocks=1 00:18:43.341 00:18:43.341 ' 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.341 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:43.341 --rc genhtml_branch_coverage=1 00:18:43.341 --rc genhtml_function_coverage=1 00:18:43.341 --rc genhtml_legend=1 00:18:43.341 --rc geninfo_all_blocks=1 00:18:43.341 --rc geninfo_unexecuted_blocks=1 00:18:43.341 00:18:43.341 ' 00:18:43.341 10:29:44 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.341 --rc genhtml_branch_coverage=1 00:18:43.342 --rc genhtml_function_coverage=1 00:18:43.342 --rc genhtml_legend=1 00:18:43.342 --rc geninfo_all_blocks=1 00:18:43.342 --rc geninfo_unexecuted_blocks=1 00:18:43.342 00:18:43.342 ' 00:18:43.342 10:29:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:18:43.342 10:29:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:18:43.342 10:29:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:43.342 10:29:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:18:43.342 10:29:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.342 10:29:44 event -- common/autotest_common.sh@10 -- # set +x 00:18:43.342 ************************************ 00:18:43.342 START TEST event_perf 00:18:43.342 ************************************ 00:18:43.342 10:29:44 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:43.342 Running I/O for 1 seconds...[2024-12-09 10:29:44.463486] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:43.342 [2024-12-09 10:29:44.463551] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473289 ] 00:18:43.601 [2024-12-09 10:29:44.536066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.601 [2024-12-09 10:29:44.580308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.601 [2024-12-09 10:29:44.580402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.601 [2024-12-09 10:29:44.580490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.601 [2024-12-09 10:29:44.580492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.536 Running I/O for 1 seconds... 00:18:44.536 lcore 0: 209867 00:18:44.536 lcore 1: 209866 00:18:44.536 lcore 2: 209867 00:18:44.536 lcore 3: 209867 00:18:44.536 done. 
00:18:44.536 00:18:44.536 real 0m1.219s 00:18:44.536 user 0m4.137s 00:18:44.536 sys 0m0.079s 00:18:44.536 10:29:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.536 10:29:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:18:44.536 ************************************ 00:18:44.536 END TEST event_perf 00:18:44.536 ************************************ 00:18:44.536 10:29:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:18:44.536 10:29:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:44.536 10:29:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.536 10:29:45 event -- common/autotest_common.sh@10 -- # set +x 00:18:44.797 ************************************ 00:18:44.797 START TEST event_reactor 00:18:44.797 ************************************ 00:18:44.797 10:29:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:18:44.797 [2024-12-09 10:29:45.752151] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:44.797 [2024-12-09 10:29:45.752220] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473539 ] 00:18:44.797 [2024-12-09 10:29:45.821900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.797 [2024-12-09 10:29:45.861605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.177 test_start 00:18:46.177 oneshot 00:18:46.177 tick 100 00:18:46.177 tick 100 00:18:46.177 tick 250 00:18:46.177 tick 100 00:18:46.177 tick 100 00:18:46.177 tick 100 00:18:46.177 tick 250 00:18:46.177 tick 500 00:18:46.177 tick 100 00:18:46.177 tick 100 00:18:46.177 tick 250 00:18:46.177 tick 100 00:18:46.177 tick 100 00:18:46.177 test_end 00:18:46.177 00:18:46.177 real 0m1.206s 00:18:46.177 user 0m1.141s 00:18:46.177 sys 0m0.061s 00:18:46.177 10:29:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.177 10:29:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:18:46.177 ************************************ 00:18:46.177 END TEST event_reactor 00:18:46.177 ************************************ 00:18:46.177 10:29:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:46.177 10:29:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:46.177 10:29:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.177 10:29:46 event -- common/autotest_common.sh@10 -- # set +x 00:18:46.177 ************************************ 00:18:46.177 START TEST event_reactor_perf 00:18:46.177 ************************************ 00:18:46.177 10:29:47 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:18:46.177 [2024-12-09 10:29:47.031252] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:46.177 [2024-12-09 10:29:47.031314] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473790 ] 00:18:46.177 [2024-12-09 10:29:47.100959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.177 [2024-12-09 10:29:47.141975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.113 test_start 00:18:47.113 test_end 00:18:47.113 Performance: 496959 events per second 00:18:47.113 00:18:47.113 real 0m1.207s 00:18:47.113 user 0m1.136s 00:18:47.113 sys 0m0.066s 00:18:47.113 10:29:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.113 10:29:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 ************************************ 00:18:47.113 END TEST event_reactor_perf 00:18:47.113 ************************************ 00:18:47.113 10:29:48 event -- event/event.sh@49 -- # uname -s 00:18:47.113 10:29:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:47.113 10:29:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:18:47.113 10:29:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:47.113 10:29:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.113 10:29:48 event -- common/autotest_common.sh@10 -- # set +x 00:18:47.371 ************************************ 00:18:47.371 START TEST event_scheduler 00:18:47.371 ************************************ 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:18:47.371 * Looking for test storage... 00:18:47.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.371 10:29:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.371 --rc genhtml_branch_coverage=1 00:18:47.371 --rc genhtml_function_coverage=1 00:18:47.371 --rc genhtml_legend=1 00:18:47.371 --rc geninfo_all_blocks=1 00:18:47.371 --rc geninfo_unexecuted_blocks=1 00:18:47.371 00:18:47.371 ' 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.371 --rc genhtml_branch_coverage=1 00:18:47.371 --rc genhtml_function_coverage=1 00:18:47.371 --rc 
genhtml_legend=1 00:18:47.371 --rc geninfo_all_blocks=1 00:18:47.371 --rc geninfo_unexecuted_blocks=1 00:18:47.371 00:18:47.371 ' 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.371 --rc genhtml_branch_coverage=1 00:18:47.371 --rc genhtml_function_coverage=1 00:18:47.371 --rc genhtml_legend=1 00:18:47.371 --rc geninfo_all_blocks=1 00:18:47.371 --rc geninfo_unexecuted_blocks=1 00:18:47.371 00:18:47.371 ' 00:18:47.371 10:29:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.371 --rc genhtml_branch_coverage=1 00:18:47.371 --rc genhtml_function_coverage=1 00:18:47.372 --rc genhtml_legend=1 00:18:47.372 --rc geninfo_all_blocks=1 00:18:47.372 --rc geninfo_unexecuted_blocks=1 00:18:47.372 00:18:47.372 ' 00:18:47.372 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:47.372 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:47.372 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=474070 00:18:47.372 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:47.372 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 474070 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 474070 ']' 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.372 10:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:47.372 [2024-12-09 10:29:48.486729] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:18:47.372 [2024-12-09 10:29:48.486777] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474070 ] 00:18:47.630 [2024-12-09 10:29:48.551956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.630 [2024-12-09 10:29:48.598264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.630 [2024-12-09 10:29:48.598284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.630 [2024-12-09 10:29:48.598360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.630 [2024-12-09 10:29:48.598362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.630 10:29:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.630 10:29:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:18:47.630 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:47.630 10:29:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.630 10:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:47.630 [2024-12-09 10:29:48.666994] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:18:47.630 [2024-12-09 10:29:48.667018] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:18:47.630 [2024-12-09 10:29:48.667027] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:47.630 [2024-12-09 10:29:48.667033] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:47.630 [2024-12-09 10:29:48.667038] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:47.630 10:29:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.630 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:47.631 [2024-12-09 10:29:48.742883] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.631 10:29:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.631 10:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:47.631 ************************************ 00:18:47.631 START TEST scheduler_create_thread 00:18:47.631 ************************************ 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.631 2 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.631 3 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.631 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 4 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 5 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 6 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 7 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 8 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 9 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 10 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:47.889 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.890 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:47.890 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.890 10:29:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:47.890 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.890 10:29:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:49.264 10:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.264 10:29:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:49.264 10:29:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:49.264 10:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.264 10:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.638 10:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.638 00:18:50.638 real 0m2.621s 00:18:50.638 user 0m0.024s 00:18:50.638 sys 0m0.005s 00:18:50.638 10:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.638 10:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.638 ************************************ 00:18:50.638 END TEST scheduler_create_thread 00:18:50.638 ************************************ 00:18:50.638 10:29:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:50.638 10:29:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 474070 00:18:50.638 10:29:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 474070 ']' 00:18:50.638 10:29:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 474070 00:18:50.638 10:29:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:18:50.638 10:29:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474070 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474070' 00:18:50.639 killing process with pid 474070 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 474070 00:18:50.639 10:29:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 474070 00:18:50.898 [2024-12-09 10:29:51.881230] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:18:51.157 00:18:51.157 real 0m3.795s 00:18:51.157 user 0m5.757s 00:18:51.157 sys 0m0.332s 00:18:51.157 10:29:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.157 10:29:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:51.157 ************************************ 00:18:51.157 END TEST event_scheduler 00:18:51.157 ************************************ 00:18:51.157 10:29:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:51.157 10:29:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:51.157 10:29:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:51.157 10:29:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.157 10:29:52 event -- common/autotest_common.sh@10 -- # set +x 00:18:51.157 ************************************ 00:18:51.157 START TEST app_repeat 00:18:51.157 ************************************ 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=474807 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 474807' 00:18:51.157 Process app_repeat pid: 474807 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:51.157 spdk_app_start Round 0 00:18:51.157 10:29:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 474807 /var/tmp/spdk-nbd.sock 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 474807 ']' 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:51.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.157 10:29:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:51.157 [2024-12-09 10:29:52.203183] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:18:51.157 [2024-12-09 10:29:52.203237] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474807 ] 00:18:51.157 [2024-12-09 10:29:52.271114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.157 [2024-12-09 10:29:52.321018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.157 [2024-12-09 10:29:52.321021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.416 10:29:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.416 10:29:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:51.416 10:29:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:51.416 Malloc0 00:18:51.675 10:29:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:51.675 Malloc1 00:18:51.675 10:29:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:51.675 
10:29:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:51.675 10:29:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:51.934 /dev/nbd0 00:18:51.934 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:51.934 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:18:51.934 1+0 records in 00:18:51.934 1+0 records out 00:18:51.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196478 s, 20.8 MB/s 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:51.934 10:29:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:51.934 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.934 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:51.934 10:29:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:52.193 /dev/nbd1 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:52.193 10:29:53 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:52.193 1+0 records in 00:18:52.193 1+0 records out 00:18:52.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175586 s, 23.3 MB/s 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:52.193 10:29:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:52.193 10:29:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:52.452 { 00:18:52.452 "nbd_device": "/dev/nbd0", 00:18:52.452 "bdev_name": "Malloc0" 00:18:52.452 }, 00:18:52.452 { 00:18:52.452 "nbd_device": "/dev/nbd1", 00:18:52.452 "bdev_name": "Malloc1" 00:18:52.452 } 00:18:52.452 ]' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:52.452 { 00:18:52.452 "nbd_device": "/dev/nbd0", 00:18:52.452 "bdev_name": "Malloc0" 00:18:52.452 
}, 00:18:52.452 { 00:18:52.452 "nbd_device": "/dev/nbd1", 00:18:52.452 "bdev_name": "Malloc1" 00:18:52.452 } 00:18:52.452 ]' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:52.452 /dev/nbd1' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:52.452 /dev/nbd1' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:52.452 256+0 records in 00:18:52.452 256+0 records out 00:18:52.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107375 s, 97.7 MB/s 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:52.452 256+0 records in 00:18:52.452 256+0 records out 00:18:52.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142326 s, 73.7 MB/s 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:52.452 256+0 records in 00:18:52.452 256+0 records out 00:18:52.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150497 s, 69.7 MB/s 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:18:52.452 10:29:53 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.452 10:29:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.711 10:29:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:52.970 10:29:54 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:52.970 10:29:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:53.229 10:29:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:53.229 10:29:54 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:53.488 10:29:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:53.746 [2024-12-09 10:29:54.692210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.746 [2024-12-09 10:29:54.729783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.746 [2024-12-09 10:29:54.729786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.746 [2024-12-09 10:29:54.770811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:53.746 [2024-12-09 10:29:54.770853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:57.036 10:29:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:57.036 10:29:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:18:57.036 spdk_app_start Round 1 00:18:57.036 10:29:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 474807 /var/tmp/spdk-nbd.sock 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 474807 ']' 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:57.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
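The `waitforlisten` calls in the trace above poll until the SPDK app has created its RPC socket at `/var/tmp/spdk-nbd.sock`. A minimal sketch of that wait-with-retries pattern follows; the function name `wait_for_socket` and its arguments are illustrative, not the actual helper from `common/autotest_common.sh`, and the demo polls a nonexistent path rather than a live SPDK instance.

```shell
# Hypothetical stand-in for the waitforlisten pattern seen in the trace:
# poll for a UNIX socket path up to max_retries times, then give up.
wait_for_socket() {
    local sock_path=$1
    local max_retries=${2:-100}   # trace shows local max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        # -S tests that the path exists and is a socket
        if [[ -S "$sock_path" ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: no SPDK app is running here, so the wait times out quickly.
wait_for_socket /definitely/not/there.sock 3 || echo "socket never appeared"
```

A real run would pass `/var/tmp/spdk-nbd.sock` and a retry budget long enough to cover app startup.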
00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.036 10:29:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:57.036 10:29:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:57.036 Malloc0 00:18:57.036 10:29:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:57.036 Malloc1 00:18:57.036 10:29:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.036 10:29:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:57.294 /dev/nbd0 00:18:57.294 10:29:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:57.294 10:29:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:57.294 1+0 records in 00:18:57.294 1+0 records out 00:18:57.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196591 s, 20.8 MB/s 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:57.294 10:29:58 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.294 10:29:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:57.294 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.294 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.295 10:29:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:57.552 /dev/nbd1 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:57.552 1+0 records in 00:18:57.552 1+0 records out 00:18:57.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229908 s, 17.8 MB/s 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.552 10:29:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.552 10:29:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:57.810 { 00:18:57.810 "nbd_device": "/dev/nbd0", 00:18:57.810 "bdev_name": "Malloc0" 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "nbd_device": "/dev/nbd1", 00:18:57.810 "bdev_name": "Malloc1" 00:18:57.810 } 00:18:57.810 ]' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:57.810 { 00:18:57.810 "nbd_device": "/dev/nbd0", 00:18:57.810 "bdev_name": "Malloc0" 00:18:57.810 }, 00:18:57.810 { 00:18:57.810 "nbd_device": "/dev/nbd1", 00:18:57.810 "bdev_name": "Malloc1" 00:18:57.810 } 00:18:57.810 ]' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:57.810 /dev/nbd1' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:57.810 /dev/nbd1' 00:18:57.810 
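The `nbd_get_count` sequence in the trace pipes `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and then counts matches with `grep -c /dev/nbd`. The sketch below reproduces that counting logic with inlined sample JSON in place of the live RPC call, and substitutes a `sed` extraction for `jq` so it has no external dependency; the variable names mirror the trace but the pipeline is a stand-in, not the actual `nbd_common.sh` code.

```shell
# Sample of what rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks returns
# when two devices are attached (shape taken from the trace above).
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract the .nbd_device values (the trace uses jq -r '.[] | .nbd_device').
nbd_disks_name=$(echo "$nbd_disks_json" | sed -n 's/.*"nbd_device": "\([^"]*\)".*/\1/p')

# Count devices the same way the trace does: grep -c /dev/nbd.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "count=$count"
```

After `nbd_stop_disk` runs for both devices, the same pipeline on the then-empty `[]` array yields a count of 0, which is what the later `'[' 0 -ne 0 ']'` check in the trace asserts.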
10:29:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:57.810 256+0 records in 00:18:57.810 256+0 records out 00:18:57.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106655 s, 98.3 MB/s 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:57.810 256+0 records in 00:18:57.810 256+0 records out 00:18:57.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143564 s, 73.0 MB/s 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:57.810 256+0 records in 00:18:57.810 256+0 records out 00:18:57.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152512 s, 68.8 MB/s 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:57.810 10:29:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:57.811 10:29:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.069 10:29:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:58.328 10:29:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:58.328 10:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:58.586 10:29:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:58.586 10:29:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:58.586 10:29:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:58.844 [2024-12-09 10:29:59.894979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:58.844 [2024-12-09 10:29:59.932793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.844 [2024-12-09 10:29:59.932796] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.844 [2024-12-09 10:29:59.974228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:58.844 [2024-12-09 10:29:59.974270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:02.131 10:30:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:02.131 10:30:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:19:02.131 spdk_app_start Round 2 00:19:02.131 10:30:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 474807 /var/tmp/spdk-nbd.sock 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 474807 ']' 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:02.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
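Both the `waitfornbd` and `waitfornbd_exit` loops visible throughout the trace poll `/proc/partitions` with `grep -q -w` for up to 20 iterations, waiting for an nbd device to appear or disappear. A sketch of the appearance-wait follows; `waitfornbd_sketch` is a hypothetical name, and a file argument is added so the demo can run against fabricated partition data instead of the real `/proc/partitions`.

```shell
# Illustrative version of the waitfornbd polling loop from the trace:
# check the partitions table for the device name, up to 20 tries.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches nbd0 as a whole word, so nbd0 won't match nbd01
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo with a fake partitions table containing nbd0.
fake_partitions=$(mktemp)
printf 'major minor  #blocks  name\n  43     0    65536  nbd0\n' > "$fake_partitions"
waitfornbd_sketch nbd0 "$fake_partitions" && echo "nbd0 present"
```

In the real helpers, success is followed by a direct-I/O `dd` read from the device (for `waitfornbd`) or taken as the signal that teardown may continue (for `waitfornbd_exit`, which waits for the opposite condition).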
00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.131 10:30:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:02.131 10:30:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:02.131 Malloc0 00:19:02.131 10:30:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:02.390 Malloc1 00:19:02.390 10:30:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:02.390 /dev/nbd0 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.390 10:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:02.390 1+0 records in 00:19:02.390 1+0 records out 00:19:02.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022743 s, 18.0 MB/s 00:19:02.390 10:30:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:02.650 10:30:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:02.650 /dev/nbd1 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:02.650 1+0 records in 00:19:02.650 1+0 records out 00:19:02.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187412 s, 21.9 MB/s 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.650 10:30:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.650 10:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:02.910 { 00:19:02.910 "nbd_device": "/dev/nbd0", 00:19:02.910 "bdev_name": "Malloc0" 00:19:02.910 }, 00:19:02.910 { 00:19:02.910 "nbd_device": "/dev/nbd1", 00:19:02.910 "bdev_name": "Malloc1" 00:19:02.910 } 00:19:02.910 ]' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:02.910 { 00:19:02.910 "nbd_device": "/dev/nbd0", 00:19:02.910 "bdev_name": "Malloc0" 00:19:02.910 }, 00:19:02.910 { 00:19:02.910 "nbd_device": "/dev/nbd1", 00:19:02.910 "bdev_name": "Malloc1" 00:19:02.910 } 00:19:02.910 ]' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:02.910 /dev/nbd1' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:02.910 /dev/nbd1' 00:19:02.910 
10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:02.910 256+0 records in 00:19:02.910 256+0 records out 00:19:02.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106357 s, 98.6 MB/s 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:02.910 10:30:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:03.169 256+0 records in 00:19:03.169 256+0 records out 00:19:03.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149021 s, 70.4 MB/s 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:03.169 256+0 records in 00:19:03.169 256+0 records out 00:19:03.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151503 s, 69.2 MB/s 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:03.169 10:30:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.170 10:30:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:03.170 10:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.445 10:30:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.445 10:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:03.768 10:30:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:03.768 10:30:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:04.079 10:30:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:04.079 [2024-12-09 10:30:05.156084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:04.079 [2024-12-09 10:30:05.194237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.079 [2024-12-09 10:30:05.194240] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.079 [2024-12-09 10:30:05.235565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:04.079 [2024-12-09 10:30:05.235602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:07.367 10:30:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 474807 /var/tmp/spdk-nbd.sock 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 474807 ']' 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:07.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:07.367 10:30:08 event.app_repeat -- event/event.sh@39 -- # killprocess 474807 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 474807 ']' 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 474807 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474807 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474807' 00:19:07.367 killing process with pid 474807 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 474807 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 474807 00:19:07.367 spdk_app_start is called in Round 0. 00:19:07.367 Shutdown signal received, stop current app iteration 00:19:07.367 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:19:07.367 spdk_app_start is called in Round 1. 00:19:07.367 Shutdown signal received, stop current app iteration 00:19:07.367 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:19:07.367 spdk_app_start is called in Round 2. 
00:19:07.367 Shutdown signal received, stop current app iteration 00:19:07.367 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:19:07.367 spdk_app_start is called in Round 3. 00:19:07.367 Shutdown signal received, stop current app iteration 00:19:07.367 10:30:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:19:07.367 10:30:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:19:07.367 00:19:07.367 real 0m16.215s 00:19:07.367 user 0m35.499s 00:19:07.367 sys 0m2.500s 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.367 10:30:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 ************************************ 00:19:07.367 END TEST app_repeat 00:19:07.367 ************************************ 00:19:07.367 10:30:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:19:07.367 10:30:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:19:07.367 10:30:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.367 10:30:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.367 10:30:08 event -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 ************************************ 00:19:07.367 START TEST cpu_locks 00:19:07.367 ************************************ 00:19:07.367 10:30:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:19:07.367 * Looking for test storage... 
00:19:07.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:19:07.367 10:30:08 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.367 10:30:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.367 10:30:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.626 10:30:08 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.626 10:30:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.627 10:30:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.627 --rc genhtml_branch_coverage=1 00:19:07.627 --rc genhtml_function_coverage=1 00:19:07.627 --rc genhtml_legend=1 00:19:07.627 --rc geninfo_all_blocks=1 00:19:07.627 --rc geninfo_unexecuted_blocks=1 00:19:07.627 00:19:07.627 ' 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.627 --rc genhtml_branch_coverage=1 00:19:07.627 --rc genhtml_function_coverage=1 00:19:07.627 --rc genhtml_legend=1 00:19:07.627 --rc geninfo_all_blocks=1 00:19:07.627 --rc geninfo_unexecuted_blocks=1 
00:19:07.627 00:19:07.627 ' 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.627 --rc genhtml_branch_coverage=1 00:19:07.627 --rc genhtml_function_coverage=1 00:19:07.627 --rc genhtml_legend=1 00:19:07.627 --rc geninfo_all_blocks=1 00:19:07.627 --rc geninfo_unexecuted_blocks=1 00:19:07.627 00:19:07.627 ' 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.627 --rc genhtml_branch_coverage=1 00:19:07.627 --rc genhtml_function_coverage=1 00:19:07.627 --rc genhtml_legend=1 00:19:07.627 --rc geninfo_all_blocks=1 00:19:07.627 --rc geninfo_unexecuted_blocks=1 00:19:07.627 00:19:07.627 ' 00:19:07.627 10:30:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:19:07.627 10:30:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:19:07.627 10:30:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:19:07.627 10:30:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.627 10:30:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:07.627 ************************************ 00:19:07.627 START TEST default_locks 00:19:07.627 ************************************ 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=478160 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 478160 00:19:07.627 10:30:08 event.cpu_locks.default_locks 
-- common/autotest_common.sh@835 -- # '[' -z 478160 ']' 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.627 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:07.627 [2024-12-09 10:30:08.682035] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:07.627 [2024-12-09 10:30:08.682084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478160 ] 00:19:07.627 [2024-12-09 10:30:08.748204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.627 [2024-12-09 10:30:08.790510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.886 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.886 10:30:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:19:07.886 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 478160 00:19:07.886 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 478160 00:19:07.886 10:30:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:08.452 lslocks: write error 00:19:08.452 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 478160 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 478160 ']' 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 478160 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 478160 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 478160' 00:19:08.453 killing process with pid 478160 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 478160 00:19:08.453 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 478160 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 478160 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 478160 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 478160 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 478160 ']' 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.711 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:08.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (478160) - No such process 00:19:08.712 ERROR: process (pid: 478160) is no longer running 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:08.712 00:19:08.712 real 0m1.199s 00:19:08.712 user 0m1.162s 00:19:08.712 sys 0m0.522s 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.712 10:30:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:08.712 ************************************ 00:19:08.712 END TEST default_locks 00:19:08.712 ************************************ 00:19:08.712 10:30:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:19:08.712 10:30:09 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:08.712 10:30:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.712 10:30:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:08.970 ************************************ 00:19:08.970 START TEST default_locks_via_rpc 00:19:08.970 ************************************ 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=478580 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 478580 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 478580 ']' 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.970 10:30:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.970 [2024-12-09 10:30:09.932586] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:08.970 [2024-12-09 10:30:09.932628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478580 ] 00:19:08.970 [2024-12-09 10:30:09.993160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.970 [2024-12-09 10:30:10.045223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.228 10:30:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 478580 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 478580 00:19:09.228 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 478580 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 478580 ']' 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 478580 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 478580 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 478580' 00:19:09.795 killing process with pid 478580 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 478580 00:19:09.795 10:30:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 478580 00:19:10.054 00:19:10.054 real 0m1.162s 00:19:10.054 user 0m1.151s 00:19:10.054 sys 0m0.492s 00:19:10.054 10:30:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.054 10:30:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.054 ************************************ 00:19:10.054 END TEST default_locks_via_rpc 00:19:10.054 ************************************ 00:19:10.054 10:30:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:10.054 10:30:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.054 10:30:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.054 10:30:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:10.054 ************************************ 00:19:10.054 START TEST non_locking_app_on_locked_coremask 00:19:10.054 ************************************ 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=478846 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 478846 /var/tmp/spdk.sock 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 478846 ']' 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:10.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.054 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:10.054 [2024-12-09 10:30:11.172566] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:10.054 [2024-12-09 10:30:11.172609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478846 ] 00:19:10.313 [2024-12-09 10:30:11.236679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.313 [2024-12-09 10:30:11.279001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=478849 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 478849 /var/tmp/spdk2.sock 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:10.313 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 478849 ']' 00:19:10.572 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:19:10.572 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.573 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:10.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:10.573 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.573 10:30:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:10.573 [2024-12-09 10:30:11.540439] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:10.573 [2024-12-09 10:30:11.540480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478849 ] 00:19:10.573 [2024-12-09 10:30:11.640329] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:10.573 [2024-12-09 10:30:11.640360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.573 [2024-12-09 10:30:11.728948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.509 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.509 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:11.509 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 478846 00:19:11.509 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:11.509 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 478846 00:19:11.769 lslocks: write error 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 478846 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 478846 ']' 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 478846 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 478846 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 478846' 00:19:11.769 killing process with pid 478846 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 478846 00:19:11.769 10:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 478846 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 478849 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 478849 ']' 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 478849 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 478849 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 478849' 00:19:12.709 killing process with pid 478849 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 478849 00:19:12.709 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 478849 00:19:12.969 00:19:12.969 real 0m2.876s 00:19:12.969 user 0m3.016s 00:19:12.969 sys 0m0.943s 00:19:12.969 10:30:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.969 10:30:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:12.969 ************************************ 00:19:12.969 END TEST non_locking_app_on_locked_coremask 00:19:12.969 ************************************ 00:19:12.969 10:30:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:12.969 10:30:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.969 10:30:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.969 10:30:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:12.969 ************************************ 00:19:12.969 START TEST locking_app_on_unlocked_coremask 00:19:12.969 ************************************ 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=479337 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 479337 /var/tmp/spdk.sock 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 479337 ']' 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.969 10:30:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.969 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:12.969 [2024-12-09 10:30:14.118236] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:12.969 [2024-12-09 10:30:14.118281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479337 ] 00:19:13.228 [2024-12-09 10:30:14.185709] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:13.228 [2024-12-09 10:30:14.185738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.228 [2024-12-09 10:30:14.226558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=479354 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 479354 /var/tmp/spdk2.sock 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 479354 ']' 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:13.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.487 10:30:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:13.487 [2024-12-09 10:30:14.483174] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:13.487 [2024-12-09 10:30:14.483221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479354 ] 00:19:13.487 [2024-12-09 10:30:14.576436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.487 [2024-12-09 10:30:14.661410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.426 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.426 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:14.426 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 479354 00:19:14.426 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 479354 00:19:14.426 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:14.685 lslocks: write error 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 479337 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 479337 ']' 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 479337 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479337 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479337' 00:19:14.685 killing process with pid 479337 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 479337 00:19:14.685 10:30:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 479337 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 479354 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 479354 ']' 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 479354 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.255 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479354 00:19:15.514 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.514 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.514 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479354' 00:19:15.514 killing process with pid 479354 00:19:15.514 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 479354 00:19:15.514 10:30:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 479354 00:19:15.774 00:19:15.774 real 0m2.744s 00:19:15.774 user 0m2.894s 00:19:15.774 sys 0m0.851s 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 ************************************ 00:19:15.774 END TEST locking_app_on_unlocked_coremask 00:19:15.774 ************************************ 00:19:15.774 10:30:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:19:15.774 10:30:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.774 10:30:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.774 10:30:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 ************************************ 00:19:15.774 START TEST locking_app_on_locked_coremask 00:19:15.774 ************************************ 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=479842 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 479842 /var/tmp/spdk.sock 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 479842 ']' 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.774 10:30:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 [2024-12-09 10:30:16.923663] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:15.774 [2024-12-09 10:30:16.923705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479842 ] 00:19:16.033 [2024-12-09 10:30:16.986822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.033 [2024-12-09 10:30:17.023640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=479851 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 479851 /var/tmp/spdk2.sock 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 479851 /var/tmp/spdk2.sock 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 479851 /var/tmp/spdk2.sock 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 479851 ']' 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:16.291 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.292 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:16.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:19:16.292 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.292 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:16.292 [2024-12-09 10:30:17.283073] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:16.292 [2024-12-09 10:30:17.283118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479851 ] 00:19:16.292 [2024-12-09 10:30:17.376916] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 479842 has claimed it. 00:19:16.292 [2024-12-09 10:30:17.376959] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:16.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (479851) - No such process 00:19:16.860 ERROR: process (pid: 479851) is no longer running 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 479842 00:19:16.860 10:30:17 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 479842 00:19:16.860 10:30:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:17.427 lslocks: write error 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 479842 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 479842 ']' 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 479842 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479842 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479842' 00:19:17.427 killing process with pid 479842 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 479842 00:19:17.427 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 479842 00:19:17.995 00:19:17.995 real 0m1.990s 00:19:17.995 user 0m2.135s 00:19:17.995 sys 0m0.642s 00:19:17.995 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.995 10:30:18 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.995 ************************************ 00:19:17.995 END TEST locking_app_on_locked_coremask 00:19:17.995 ************************************ 00:19:17.995 10:30:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:19:17.995 10:30:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.995 10:30:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.995 10:30:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:17.995 ************************************ 00:19:17.995 START TEST locking_overlapped_coremask 00:19:17.995 ************************************ 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=480159 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 480159 /var/tmp/spdk.sock 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 480159 ']' 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.995 10:30:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:17.995 [2024-12-09 10:30:18.977787] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:17.995 [2024-12-09 10:30:18.977830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480159 ] 00:19:17.995 [2024-12-09 10:30:19.041454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.995 [2024-12-09 10:30:19.086868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.995 [2024-12-09 10:30:19.086963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.995 [2024-12-09 10:30:19.086965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=480339 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 480339 /var/tmp/spdk2.sock 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 480339 /var/tmp/spdk2.sock 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 480339 /var/tmp/spdk2.sock 00:19:18.254 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 480339 ']' 00:19:18.255 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:18.255 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.255 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:18.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:18.255 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.255 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:18.255 [2024-12-09 10:30:19.353924] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:18.255 [2024-12-09 10:30:19.353970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480339 ] 00:19:18.514 [2024-12-09 10:30:19.453791] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 480159 has claimed it. 00:19:18.514 [2024-12-09 10:30:19.453828] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:19.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (480339) - No such process 00:19:19.080 ERROR: process (pid: 480339) is no longer running 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 480159 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 480159 ']' 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 480159 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.080 10:30:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480159 00:19:19.080 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.080 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.080 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480159' 00:19:19.080 killing process with pid 480159 00:19:19.080 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 480159 00:19:19.080 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 480159 00:19:19.338 00:19:19.338 real 0m1.468s 00:19:19.338 user 0m3.979s 00:19:19.338 sys 0m0.397s 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 ************************************ 
00:19:19.338 END TEST locking_overlapped_coremask 00:19:19.338 ************************************ 00:19:19.338 10:30:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:19:19.338 10:30:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:19.338 10:30:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.338 10:30:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 ************************************ 00:19:19.338 START TEST locking_overlapped_coremask_via_rpc 00:19:19.338 ************************************ 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=480521 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 480521 /var/tmp/spdk.sock 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 480521 ']' 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:19.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.338 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 [2024-12-09 10:30:20.508806] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:19.338 [2024-12-09 10:30:20.508847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480521 ] 00:19:19.596 [2024-12-09 10:30:20.579007] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:19:19.596 [2024-12-09 10:30:20.579040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.596 [2024-12-09 10:30:20.622966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.596 [2024-12-09 10:30:20.623062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.596 [2024-12-09 10:30:20.623065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=480603 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 480603 /var/tmp/spdk2.sock 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 480603 ']' 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:19.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.855 10:30:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:19.855 [2024-12-09 10:30:20.889662] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:19.855 [2024-12-09 10:30:20.889709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480603 ] 00:19:19.855 [2024-12-09 10:30:20.990581] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:19.855 [2024-12-09 10:30:20.990614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.113 [2024-12-09 10:30:21.079129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.113 [2024-12-09 10:30:21.079240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.113 [2024-12-09 10:30:21.079242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.680 10:30:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.680 [2024-12-09 10:30:21.737069] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 480521 has claimed it. 00:19:20.680 request: 00:19:20.680 { 00:19:20.680 "method": "framework_enable_cpumask_locks", 00:19:20.680 "req_id": 1 00:19:20.680 } 00:19:20.680 Got JSON-RPC error response 00:19:20.680 response: 00:19:20.680 { 00:19:20.680 "code": -32603, 00:19:20.680 "message": "Failed to claim CPU core: 2" 00:19:20.680 } 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 480521 /var/tmp/spdk.sock 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 480521 ']' 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.680 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 480603 /var/tmp/spdk2.sock 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 480603 ']' 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:20.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.939 10:30:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:21.197 00:19:21.197 real 0m1.713s 00:19:21.197 user 0m0.831s 00:19:21.197 sys 0m0.135s 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.197 10:30:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.197 ************************************ 00:19:21.197 END TEST locking_overlapped_coremask_via_rpc 00:19:21.197 ************************************ 00:19:21.197 10:30:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:19:21.197 10:30:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 480521 ]] 00:19:21.197 10:30:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 480521 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 480521 ']' 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 480521 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480521 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.197 10:30:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.198 10:30:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480521' 00:19:21.198 killing process with pid 480521 00:19:21.198 10:30:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 480521 00:19:21.198 10:30:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 480521 00:19:21.456 10:30:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 480603 ]] 00:19:21.456 10:30:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 480603 00:19:21.456 10:30:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 480603 ']' 00:19:21.456 10:30:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 480603 00:19:21.456 10:30:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:19:21.456 10:30:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.456 10:30:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480603 00:19:21.715 10:30:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.715 10:30:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.715 10:30:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480603' 00:19:21.715 
killing process with pid 480603 00:19:21.715 10:30:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 480603 00:19:21.715 10:30:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 480603 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 480521 ]] 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 480521 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 480521 ']' 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 480521 00:19:21.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (480521) - No such process 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 480521 is not found' 00:19:21.974 Process with pid 480521 is not found 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 480603 ]] 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 480603 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 480603 ']' 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 480603 00:19:21.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (480603) - No such process 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 480603 is not found' 00:19:21.974 Process with pid 480603 is not found 00:19:21.974 10:30:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:19:21.974 00:19:21.974 real 0m14.577s 00:19:21.974 user 0m25.146s 00:19:21.974 sys 0m4.930s 00:19:21.974 10:30:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.974 10:30:23 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.974 ************************************ 00:19:21.974 END TEST cpu_locks 00:19:21.974 ************************************ 00:19:21.974 00:19:21.974 real 0m38.822s 00:19:21.974 user 1m13.081s 00:19:21.974 sys 0m8.341s 00:19:21.974 10:30:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.974 10:30:23 event -- common/autotest_common.sh@10 -- # set +x 00:19:21.974 ************************************ 00:19:21.974 END TEST event 00:19:21.974 ************************************ 00:19:21.974 10:30:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:19:21.974 10:30:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.974 10:30:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.974 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.974 ************************************ 00:19:21.974 START TEST thread 00:19:21.974 ************************************ 00:19:21.974 10:30:23 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:19:22.233 * Looking for test storage... 
00:19:22.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.233 10:30:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.233 10:30:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.233 10:30:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.233 10:30:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.233 10:30:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.233 10:30:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.233 10:30:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.233 10:30:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.233 10:30:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.233 10:30:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.233 10:30:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.233 10:30:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:19:22.233 10:30:23 thread -- scripts/common.sh@345 -- # : 1 00:19:22.233 10:30:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.233 10:30:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.233 10:30:23 thread -- scripts/common.sh@365 -- # decimal 1 00:19:22.233 10:30:23 thread -- scripts/common.sh@353 -- # local d=1 00:19:22.233 10:30:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.233 10:30:23 thread -- scripts/common.sh@355 -- # echo 1 00:19:22.233 10:30:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.233 10:30:23 thread -- scripts/common.sh@366 -- # decimal 2 00:19:22.233 10:30:23 thread -- scripts/common.sh@353 -- # local d=2 00:19:22.233 10:30:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.233 10:30:23 thread -- scripts/common.sh@355 -- # echo 2 00:19:22.233 10:30:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.233 10:30:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.233 10:30:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.233 10:30:23 thread -- scripts/common.sh@368 -- # return 0 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.233 --rc genhtml_branch_coverage=1 00:19:22.233 --rc genhtml_function_coverage=1 00:19:22.233 --rc genhtml_legend=1 00:19:22.233 --rc geninfo_all_blocks=1 00:19:22.233 --rc geninfo_unexecuted_blocks=1 00:19:22.233 00:19:22.233 ' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.233 --rc genhtml_branch_coverage=1 00:19:22.233 --rc genhtml_function_coverage=1 00:19:22.233 --rc genhtml_legend=1 00:19:22.233 --rc geninfo_all_blocks=1 00:19:22.233 --rc geninfo_unexecuted_blocks=1 00:19:22.233 00:19:22.233 ' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.233 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.233 --rc genhtml_branch_coverage=1 00:19:22.233 --rc genhtml_function_coverage=1 00:19:22.233 --rc genhtml_legend=1 00:19:22.233 --rc geninfo_all_blocks=1 00:19:22.233 --rc geninfo_unexecuted_blocks=1 00:19:22.233 00:19:22.233 ' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.233 --rc genhtml_branch_coverage=1 00:19:22.233 --rc genhtml_function_coverage=1 00:19:22.233 --rc genhtml_legend=1 00:19:22.233 --rc geninfo_all_blocks=1 00:19:22.233 --rc geninfo_unexecuted_blocks=1 00:19:22.233 00:19:22.233 ' 00:19:22.233 10:30:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.233 10:30:23 thread -- common/autotest_common.sh@10 -- # set +x 00:19:22.233 ************************************ 00:19:22.233 START TEST thread_poller_perf 00:19:22.233 ************************************ 00:19:22.233 10:30:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:19:22.233 [2024-12-09 10:30:23.318630] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:22.233 [2024-12-09 10:30:23.318698] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481140 ] 00:19:22.233 [2024-12-09 10:30:23.386388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.493 [2024-12-09 10:30:23.427779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.493 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:19:23.432 [2024-12-09T09:30:24.608Z] ====================================== 00:19:23.432 [2024-12-09T09:30:24.608Z] busy:2306090586 (cyc) 00:19:23.432 [2024-12-09T09:30:24.608Z] total_run_count: 412000 00:19:23.432 [2024-12-09T09:30:24.608Z] tsc_hz: 2300000000 (cyc) 00:19:23.432 [2024-12-09T09:30:24.608Z] ====================================== 00:19:23.432 [2024-12-09T09:30:24.608Z] poller_cost: 5597 (cyc), 2433 (nsec) 00:19:23.432 00:19:23.432 real 0m1.211s 00:19:23.432 user 0m1.137s 00:19:23.432 sys 0m0.070s 00:19:23.432 10:30:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.432 10:30:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 ************************************ 00:19:23.432 END TEST thread_poller_perf 00:19:23.432 ************************************ 00:19:23.432 10:30:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:19:23.432 10:30:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:19:23.432 10:30:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.432 10:30:24 thread -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 ************************************ 00:19:23.432 START TEST thread_poller_perf 00:19:23.432 
************************************ 00:19:23.432 10:30:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:19:23.432 [2024-12-09 10:30:24.583158] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:23.432 [2024-12-09 10:30:24.583210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481321 ] 00:19:23.691 [2024-12-09 10:30:24.648545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.691 [2024-12-09 10:30:24.688928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.691 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:19:24.631 [2024-12-09T09:30:25.807Z] ====================================== 00:19:24.631 [2024-12-09T09:30:25.807Z] busy:2301789468 (cyc) 00:19:24.631 [2024-12-09T09:30:25.807Z] total_run_count: 5095000 00:19:24.631 [2024-12-09T09:30:25.807Z] tsc_hz: 2300000000 (cyc) 00:19:24.631 [2024-12-09T09:30:25.807Z] ====================================== 00:19:24.631 [2024-12-09T09:30:25.807Z] poller_cost: 451 (cyc), 196 (nsec) 00:19:24.631 00:19:24.631 real 0m1.193s 00:19:24.631 user 0m1.126s 00:19:24.631 sys 0m0.064s 00:19:24.631 10:30:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.631 10:30:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:19:24.631 ************************************ 00:19:24.631 END TEST thread_poller_perf 00:19:24.631 ************************************ 00:19:24.631 10:30:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:19:24.631 00:19:24.631 real 0m2.666s 00:19:24.631 user 0m2.411s 00:19:24.631 sys 0m0.266s 00:19:24.631 10:30:25 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.631 10:30:25 thread -- common/autotest_common.sh@10 -- # set +x 00:19:24.631 ************************************ 00:19:24.631 END TEST thread 00:19:24.631 ************************************ 00:19:24.890 10:30:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:19:24.890 10:30:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:19:24.890 10:30:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.890 10:30:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.890 10:30:25 -- common/autotest_common.sh@10 -- # set +x 00:19:24.890 ************************************ 00:19:24.890 START TEST app_cmdline 00:19:24.890 ************************************ 00:19:24.890 10:30:25 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:19:24.890 * Looking for test storage... 00:19:24.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:24.890 10:30:25 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.890 10:30:25 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.890 10:30:25 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.890 10:30:26 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.890 10:30:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:19:24.890 10:30:26 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.890 10:30:26 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.890 --rc genhtml_branch_coverage=1 
00:19:24.890 --rc genhtml_function_coverage=1 00:19:24.890 --rc genhtml_legend=1 00:19:24.890 --rc geninfo_all_blocks=1 00:19:24.890 --rc geninfo_unexecuted_blocks=1 00:19:24.890 00:19:24.890 ' 00:19:24.890 10:30:26 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.890 --rc genhtml_branch_coverage=1 00:19:24.890 --rc genhtml_function_coverage=1 00:19:24.890 --rc genhtml_legend=1 00:19:24.890 --rc geninfo_all_blocks=1 00:19:24.890 --rc geninfo_unexecuted_blocks=1 00:19:24.890 00:19:24.890 ' 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.891 --rc genhtml_branch_coverage=1 00:19:24.891 --rc genhtml_function_coverage=1 00:19:24.891 --rc genhtml_legend=1 00:19:24.891 --rc geninfo_all_blocks=1 00:19:24.891 --rc geninfo_unexecuted_blocks=1 00:19:24.891 00:19:24.891 ' 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.891 --rc genhtml_branch_coverage=1 00:19:24.891 --rc genhtml_function_coverage=1 00:19:24.891 --rc genhtml_legend=1 00:19:24.891 --rc geninfo_all_blocks=1 00:19:24.891 --rc geninfo_unexecuted_blocks=1 00:19:24.891 00:19:24.891 ' 00:19:24.891 10:30:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:19:24.891 10:30:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=481689 00:19:24.891 10:30:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 481689 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 481689 ']' 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@842 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.891 10:30:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:24.891 10:30:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:19:25.150 [2024-12-09 10:30:26.079083] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:25.150 [2024-12-09 10:30:26.079134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481689 ] 00:19:25.150 [2024-12-09 10:30:26.144586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.150 [2024-12-09 10:30:26.187149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.408 10:30:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.408 10:30:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:19:25.408 10:30:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:19:25.408 { 00:19:25.408 "version": "SPDK v25.01-pre git sha1 b920049a1", 00:19:25.408 "fields": { 00:19:25.408 "major": 25, 00:19:25.408 "minor": 1, 00:19:25.408 "patch": 0, 00:19:25.408 "suffix": "-pre", 00:19:25.409 "commit": "b920049a1" 00:19:25.409 } 00:19:25.409 } 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:19:25.409 10:30:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:19:25.409 10:30:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.409 10:30:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:25.409 10:30:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.667 10:30:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:19:25.667 10:30:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:19:25.667 10:30:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:25.667 request: 00:19:25.667 { 00:19:25.667 "method": "env_dpdk_get_mem_stats", 00:19:25.667 "req_id": 1 00:19:25.667 } 00:19:25.667 Got JSON-RPC error response 00:19:25.667 response: 00:19:25.667 { 00:19:25.667 "code": -32601, 00:19:25.667 "message": "Method not found" 00:19:25.667 } 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.667 10:30:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 481689 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 481689 ']' 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 481689 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.667 10:30:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481689 00:19:25.925 10:30:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.925 10:30:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.925 10:30:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481689' 00:19:25.925 killing process with pid 481689 00:19:25.925 10:30:26 
app_cmdline -- common/autotest_common.sh@973 -- # kill 481689 00:19:25.925 10:30:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 481689 00:19:26.184 00:19:26.184 real 0m1.332s 00:19:26.184 user 0m1.570s 00:19:26.184 sys 0m0.412s 00:19:26.184 10:30:27 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.184 10:30:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:26.184 ************************************ 00:19:26.184 END TEST app_cmdline 00:19:26.184 ************************************ 00:19:26.184 10:30:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:19:26.184 10:30:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:26.184 10:30:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.184 10:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:26.184 ************************************ 00:19:26.184 START TEST version 00:19:26.184 ************************************ 00:19:26.184 10:30:27 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:19:26.184 * Looking for test storage... 
00:19:26.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:26.184 10:30:27 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.184 10:30:27 version -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.184 10:30:27 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.441 10:30:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.441 10:30:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.441 10:30:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.441 10:30:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.441 10:30:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.441 10:30:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.441 10:30:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.441 10:30:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.441 10:30:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.441 10:30:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.441 10:30:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.441 10:30:27 version -- scripts/common.sh@344 -- # case "$op" in 00:19:26.441 10:30:27 version -- scripts/common.sh@345 -- # : 1 00:19:26.441 10:30:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.441 10:30:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.441 10:30:27 version -- scripts/common.sh@365 -- # decimal 1 00:19:26.441 10:30:27 version -- scripts/common.sh@353 -- # local d=1 00:19:26.441 10:30:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.441 10:30:27 version -- scripts/common.sh@355 -- # echo 1 00:19:26.441 10:30:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.441 10:30:27 version -- scripts/common.sh@366 -- # decimal 2 00:19:26.441 10:30:27 version -- scripts/common.sh@353 -- # local d=2 00:19:26.441 10:30:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.441 10:30:27 version -- scripts/common.sh@355 -- # echo 2 00:19:26.441 10:30:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.441 10:30:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.441 10:30:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.441 10:30:27 version -- scripts/common.sh@368 -- # return 0 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.441 --rc genhtml_branch_coverage=1 00:19:26.441 --rc genhtml_function_coverage=1 00:19:26.441 --rc genhtml_legend=1 00:19:26.441 --rc geninfo_all_blocks=1 00:19:26.441 --rc geninfo_unexecuted_blocks=1 00:19:26.441 00:19:26.441 ' 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.441 --rc genhtml_branch_coverage=1 00:19:26.441 --rc genhtml_function_coverage=1 00:19:26.441 --rc genhtml_legend=1 00:19:26.441 --rc geninfo_all_blocks=1 00:19:26.441 --rc geninfo_unexecuted_blocks=1 00:19:26.441 00:19:26.441 ' 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.441 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.441 --rc genhtml_branch_coverage=1 00:19:26.441 --rc genhtml_function_coverage=1 00:19:26.441 --rc genhtml_legend=1 00:19:26.441 --rc geninfo_all_blocks=1 00:19:26.441 --rc geninfo_unexecuted_blocks=1 00:19:26.441 00:19:26.441 ' 00:19:26.441 10:30:27 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.442 --rc genhtml_branch_coverage=1 00:19:26.442 --rc genhtml_function_coverage=1 00:19:26.442 --rc genhtml_legend=1 00:19:26.442 --rc geninfo_all_blocks=1 00:19:26.442 --rc geninfo_unexecuted_blocks=1 00:19:26.442 00:19:26.442 ' 00:19:26.442 10:30:27 version -- app/version.sh@17 -- # get_header_version major 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # cut -f2 00:19:26.442 10:30:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # tr -d '"' 00:19:26.442 10:30:27 version -- app/version.sh@17 -- # major=25 00:19:26.442 10:30:27 version -- app/version.sh@18 -- # get_header_version minor 00:19:26.442 10:30:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # cut -f2 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # tr -d '"' 00:19:26.442 10:30:27 version -- app/version.sh@18 -- # minor=1 00:19:26.442 10:30:27 version -- app/version.sh@19 -- # get_header_version patch 00:19:26.442 10:30:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # cut -f2 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # tr -d '"' 00:19:26.442 
10:30:27 version -- app/version.sh@19 -- # patch=0 00:19:26.442 10:30:27 version -- app/version.sh@20 -- # get_header_version suffix 00:19:26.442 10:30:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # cut -f2 00:19:26.442 10:30:27 version -- app/version.sh@14 -- # tr -d '"' 00:19:26.442 10:30:27 version -- app/version.sh@20 -- # suffix=-pre 00:19:26.442 10:30:27 version -- app/version.sh@22 -- # version=25.1 00:19:26.442 10:30:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:19:26.442 10:30:27 version -- app/version.sh@28 -- # version=25.1rc0 00:19:26.442 10:30:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:26.442 10:30:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:19:26.442 10:30:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:19:26.442 10:30:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:19:26.442 00:19:26.442 real 0m0.242s 00:19:26.442 user 0m0.148s 00:19:26.442 sys 0m0.137s 00:19:26.442 10:30:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.442 10:30:27 version -- common/autotest_common.sh@10 -- # set +x 00:19:26.442 ************************************ 00:19:26.442 END TEST version 00:19:26.442 ************************************ 00:19:26.442 10:30:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:19:26.442 10:30:27 -- spdk/autotest.sh@194 -- # uname -s 00:19:26.442 10:30:27 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:19:26.442 10:30:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:26.442 10:30:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:26.442 10:30:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:26.442 10:30:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.442 10:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:26.442 10:30:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:19:26.442 10:30:27 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:19:26.442 10:30:27 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:26.442 10:30:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.442 10:30:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.442 10:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:26.442 ************************************ 00:19:26.442 START TEST nvmf_tcp 00:19:26.442 ************************************ 00:19:26.442 10:30:27 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:26.700 * Looking for test storage... 
00:19:26.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.700 10:30:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.700 --rc genhtml_branch_coverage=1 00:19:26.700 --rc genhtml_function_coverage=1 00:19:26.700 --rc genhtml_legend=1 00:19:26.700 --rc geninfo_all_blocks=1 00:19:26.700 --rc geninfo_unexecuted_blocks=1 00:19:26.700 00:19:26.700 ' 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.700 --rc genhtml_branch_coverage=1 00:19:26.700 --rc genhtml_function_coverage=1 00:19:26.700 --rc genhtml_legend=1 00:19:26.700 --rc geninfo_all_blocks=1 00:19:26.700 --rc geninfo_unexecuted_blocks=1 00:19:26.700 00:19:26.700 ' 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:19:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.700 --rc genhtml_branch_coverage=1 00:19:26.700 --rc genhtml_function_coverage=1 00:19:26.700 --rc genhtml_legend=1 00:19:26.700 --rc geninfo_all_blocks=1 00:19:26.700 --rc geninfo_unexecuted_blocks=1 00:19:26.700 00:19:26.700 ' 00:19:26.700 10:30:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.700 --rc genhtml_branch_coverage=1 00:19:26.700 --rc genhtml_function_coverage=1 00:19:26.700 --rc genhtml_legend=1 00:19:26.700 --rc geninfo_all_blocks=1 00:19:26.700 --rc geninfo_unexecuted_blocks=1 00:19:26.700 00:19:26.701 ' 00:19:26.701 10:30:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:19:26.701 10:30:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:19:26.701 10:30:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:19:26.701 10:30:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.701 10:30:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.701 10:30:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:26.701 ************************************ 00:19:26.701 START TEST nvmf_target_core 00:19:26.701 ************************************ 00:19:26.701 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:19:26.701 * Looking for test storage... 
00:19:26.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.959 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.960 --rc genhtml_branch_coverage=1 00:19:26.960 --rc genhtml_function_coverage=1 00:19:26.960 --rc genhtml_legend=1 00:19:26.960 --rc geninfo_all_blocks=1 00:19:26.960 --rc geninfo_unexecuted_blocks=1 00:19:26.960 00:19:26.960 ' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.960 --rc genhtml_branch_coverage=1 
00:19:26.960 --rc genhtml_function_coverage=1 00:19:26.960 --rc genhtml_legend=1 00:19:26.960 --rc geninfo_all_blocks=1 00:19:26.960 --rc geninfo_unexecuted_blocks=1 00:19:26.960 00:19:26.960 ' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.960 --rc genhtml_branch_coverage=1 00:19:26.960 --rc genhtml_function_coverage=1 00:19:26.960 --rc genhtml_legend=1 00:19:26.960 --rc geninfo_all_blocks=1 00:19:26.960 --rc geninfo_unexecuted_blocks=1 00:19:26.960 00:19:26.960 ' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.960 --rc genhtml_branch_coverage=1 00:19:26.960 --rc genhtml_function_coverage=1 00:19:26.960 --rc genhtml_legend=1 00:19:26.960 --rc geninfo_all_blocks=1 00:19:26.960 --rc geninfo_unexecuted_blocks=1 00:19:26.960 00:19:26.960 ' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.960 10:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:26.960 ************************************ 00:19:26.960 START TEST nvmf_abort 00:19:26.960 ************************************ 00:19:26.960 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:26.960 * Looking for test storage... 
00:19:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.960 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.960 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.960 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.219 
10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.219 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.220 --rc genhtml_branch_coverage=1 00:19:27.220 --rc genhtml_function_coverage=1 00:19:27.220 --rc genhtml_legend=1 00:19:27.220 --rc geninfo_all_blocks=1 00:19:27.220 --rc 
geninfo_unexecuted_blocks=1 00:19:27.220 00:19:27.220 ' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.220 --rc genhtml_branch_coverage=1 00:19:27.220 --rc genhtml_function_coverage=1 00:19:27.220 --rc genhtml_legend=1 00:19:27.220 --rc geninfo_all_blocks=1 00:19:27.220 --rc geninfo_unexecuted_blocks=1 00:19:27.220 00:19:27.220 ' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.220 --rc genhtml_branch_coverage=1 00:19:27.220 --rc genhtml_function_coverage=1 00:19:27.220 --rc genhtml_legend=1 00:19:27.220 --rc geninfo_all_blocks=1 00:19:27.220 --rc geninfo_unexecuted_blocks=1 00:19:27.220 00:19:27.220 ' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.220 --rc genhtml_branch_coverage=1 00:19:27.220 --rc genhtml_function_coverage=1 00:19:27.220 --rc genhtml_legend=1 00:19:27.220 --rc geninfo_all_blocks=1 00:19:27.220 --rc geninfo_unexecuted_blocks=1 00:19:27.220 00:19:27.220 ' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
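The trace above walks the field-by-field `cmp_versions` loop from `scripts/common.sh` to decide that lcov 1.15 is older than 2 before enabling the branch/function-coverage `--rc` flags. A hedged sketch of the same check using GNU `sort -V` instead of the script's manual `IFS=.-:` loop; `version_lt` is an illustrative helper, not a function from the SPDK tree:

```shell
# version_lt A B: succeed when version string A sorts strictly before B.
# Relies on GNU sort's -V (version sort) rather than splitting fields by hand.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2: enable the --rc coverage flags"
```

`sort -V` handles multi-field versions (e.g. 1.15 vs 1.9) correctly, which is exactly what the manual loop in the trace is reimplementing for shells whose sort lacks it.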
00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.220 10:30:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
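The `paths/export.sh` traces above show the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories being prepended on every source, so `$PATH` accumulates many duplicate entries. A hedged sketch of an idempotent prepend (`path_prepend` is an illustrative helper, not from the repo), demonstrated against a plain variable so the live environment is untouched:

```shell
# path_prepend LIST DIR: echo colon-separated LIST with DIR prepended,
# unless DIR is already an element of LIST (then LIST is echoed unchanged).
path_prepend() {
    case ":$1:" in
        *":$2:"*) echo "$1" ;;           # already present: no duplicate added
        *)        echo "$2${1:+:$1}" ;;  # prepend, keeping LIST empty-safe
    esac
}

p=/usr/local/bin
p=$(path_prepend "$p" /opt/go/1.21.1/bin)
p=$(path_prepend "$p" /opt/go/1.21.1/bin)   # second call is a no-op
echo "$p"                                   # → /opt/go/1.21.1/bin:/usr/local/bin
```

Duplicates in `$PATH` are harmless for lookup (the first hit wins) but bloat the environment and make traces like the ones above hard to read.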
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:19:27.220 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.496 10:30:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:32.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:32.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.496 10:30:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:32.496 Found net devices under 0000:86:00.0: cvl_0_0 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.496 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:19:32.497 Found net devices under 0000:86:00.1: cvl_0_1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:32.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:19:32.497 00:19:32.497 --- 10.0.0.2 ping statistics --- 00:19:32.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.497 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:19:32.497 00:19:32.497 --- 10.0.0.1 ping statistics --- 00:19:32.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.497 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort 
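The `nvmf_tcp_init` section above moves one port of the NIC (`cvl_0_0`) into a fresh network namespace so the target and initiator can exchange real TCP traffic on a single host, then verifies both directions with `ping`. A condensed, hedged sketch of that plumbing; `setup_tcp_ns` is an illustrative wrapper, and it only echoes the commands unless `RUN=1` is set, since the real steps need root:

```shell
# Emit (and with RUN=1, execute) the namespace setup captured in the trace:
# target interface goes into the namespace at 10.0.0.2, initiator stays in the
# default namespace at 10.0.0.1, matching NVMF_FIRST_TARGET_IP/INITIATOR_IP.
setup_tcp_ns() {
    local tgt=$1 ini=$2 ns=$3 c
    local cmds=(
        "ip netns add $ns"
        "ip link set $tgt netns $ns"
        "ip addr add 10.0.0.1/24 dev $ini"
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
        "ip link set $ini up"
        "ip netns exec $ns ip link set $tgt up"
        "ip netns exec $ns ip link set lo up"
    )
    for c in "${cmds[@]}"; do
        echo "$c"
        if [ "${RUN:-0}" = 1 ]; then $c; fi
    done
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk   # dry run: prints the 7 ip commands
```

This is why the target app is later launched as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...` — it must run where `cvl_0_0` now lives.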
-- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=485173 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 485173 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 485173 ']' 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:32.757 [2024-12-09 10:30:33.688490] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:19:32.757 [2024-12-09 10:30:33.688535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.757 [2024-12-09 10:30:33.758224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:32.757 [2024-12-09 10:30:33.801613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.757 [2024-12-09 10:30:33.801649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.757 [2024-12-09 10:30:33.801657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.757 [2024-12-09 10:30:33.801663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.757 [2024-12-09 10:30:33.801668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.757 [2024-12-09 10:30:33.802963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.757 [2024-12-09 10:30:33.803056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.757 [2024-12-09 10:30:33.803109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.757 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.757 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:19:32.757 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.757 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.757 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.016 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:33.016 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.016 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 [2024-12-09 10:30:33.940571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.016 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 Malloc0 00:19:33.017 10:30:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 Delay0 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 [2024-12-09 10:30:34.012415] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:33.017 [2024-12-09 10:30:34.128717] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:35.551 Initializing NVMe Controllers 00:19:35.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:35.552 controller IO queue size 128 less than required 00:19:35.552 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:35.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:35.552 Initialization complete. Launching workers. 
00:19:35.552 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36610 00:19:35.552 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36671, failed to submit 62 00:19:35.552 success 36614, unsuccessful 57, failed 0 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.552 rmmod nvme_tcp 00:19:35.552 rmmod nvme_fabrics 00:19:35.552 rmmod nvme_keyring 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:19:35.552 10:30:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 485173 ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 485173 ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485173' 00:19:35.552 killing process with pid 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 485173 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.552 10:30:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:38.088 00:19:38.088 real 0m10.681s 00:19:38.088 user 0m11.497s 00:19:38.088 sys 0m5.120s 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:38.088 ************************************ 00:19:38.088 END TEST nvmf_abort 00:19:38.088 ************************************ 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:38.088 ************************************ 00:19:38.088 START TEST nvmf_ns_hotplug_stress 00:19:38.088 ************************************ 00:19:38.088 10:30:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:38.088 * Looking for test storage... 00:19:38.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.088 
10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.088 10:30:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.088 --rc genhtml_branch_coverage=1 00:19:38.088 --rc genhtml_function_coverage=1 00:19:38.088 --rc genhtml_legend=1 00:19:38.088 --rc geninfo_all_blocks=1 00:19:38.088 --rc geninfo_unexecuted_blocks=1 00:19:38.088 00:19:38.088 ' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.088 --rc genhtml_branch_coverage=1 00:19:38.088 --rc genhtml_function_coverage=1 00:19:38.088 --rc genhtml_legend=1 00:19:38.088 --rc geninfo_all_blocks=1 00:19:38.088 --rc geninfo_unexecuted_blocks=1 00:19:38.088 00:19:38.088 ' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.088 --rc genhtml_branch_coverage=1 00:19:38.088 --rc genhtml_function_coverage=1 00:19:38.088 --rc genhtml_legend=1 00:19:38.088 --rc geninfo_all_blocks=1 00:19:38.088 --rc geninfo_unexecuted_blocks=1 00:19:38.088 00:19:38.088 ' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.088 --rc genhtml_branch_coverage=1 00:19:38.088 --rc genhtml_function_coverage=1 00:19:38.088 --rc genhtml_legend=1 00:19:38.088 --rc geninfo_all_blocks=1 00:19:38.088 --rc geninfo_unexecuted_blocks=1 00:19:38.088 
00:19:38.088 ' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.088 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.089 10:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.364 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.364 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:43.364 10:30:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:43.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:43.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:43.365 10:30:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:43.365 Found net devices under 0000:86:00.0: cvl_0_0 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:43.365 10:30:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:43.365 Found net devices under 0000:86:00.1: cvl_0_1 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.365 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.625 10:30:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:43.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:19:43.625 00:19:43.625 --- 10.0.0.2 ping statistics --- 00:19:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.625 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:43.625 00:19:43.625 --- 10.0.0.1 ping statistics --- 00:19:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.625 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=489190 00:19:43.625 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 489190 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 489190 ']' 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.626 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.626 [2024-12-09 10:30:44.762614] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:19:43.626 [2024-12-09 10:30:44.762658] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.886 [2024-12-09 10:30:44.833467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.886 [2024-12-09 10:30:44.874995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.886 [2024-12-09 10:30:44.875051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.886 [2024-12-09 10:30:44.875059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.886 [2024-12-09 10:30:44.875065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.886 [2024-12-09 10:30:44.875070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
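The `nvmf_tcp_init` steps traced above wire the two ports of one NIC into a loopback topology: the target-side interface moves into a network namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) can talk TCP on the same host. A rough standalone sketch, with the interface names, namespace, IPs, and iptables comment copied from the log (it needs root and those exact NICs, so it no-ops anywhere else):

```shell
# Sketch of the loopback topology nvmf_tcp_init builds in the trace above.
setup_nvmf_tcp_topology() {
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target side enters the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator keeps 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                              # initiator -> target sanity check
}
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_0 >/dev/null 2>&1; then
    setup_nvmf_tcp_topology
    topology_status=up
else
    echo "skipping: needs root and interface cvl_0_0"
    topology_status=skipped
fi
```

Once this is in place, `nvmf_tgt` is launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` in the trace.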
00:19:43.886 [2024-12-09 10:30:44.876350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.886 [2024-12-09 10:30:44.876421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.886 [2024-12-09 10:30:44.876424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.886 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.886 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:19:43.886 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.886 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.886 10:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.886 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.886 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:19:43.886 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:44.144 [2024-12-09 10:30:45.198610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.144 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:44.403 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.661 [2024-12-09 10:30:45.604071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.661 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:44.661 10:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:44.920 Malloc0 00:19:44.920 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:45.179 Delay0 00:19:45.179 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.438 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:45.697 NULL1 00:19:45.697 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:45.697 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=489673 00:19:45.697 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:45.697 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:45.697 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.955 Read completed with error (sct=0, sc=11) 00:19:45.955 10:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:46.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:46.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:46.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:46.214 10:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:19:46.214 10:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:46.473 true 00:19:46.473 10:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:46.473 10:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.408 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.408 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:19:47.408 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:47.666 true 00:19:47.667 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:47.667 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.924 10:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.924 10:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:19:47.924 10:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:48.182 true 00:19:48.182 10:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:48.182 10:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:49.555 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:49.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:49.555 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:19:49.555 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:49.813 true 00:19:49.813 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:49.813 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.813 10:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.071 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:19:50.071 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:50.390 true 00:19:50.390 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:50.390 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.417 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:19:51.417 10:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.676 10:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:19:51.676 10:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:51.934 true 00:19:51.934 10:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:51.934 10:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.871 10:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:52.871 10:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:19:52.871 10:30:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:53.130 true 00:19:53.130 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:53.130 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.390 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.390 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:19:53.390 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:53.649 true 00:19:53.649 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:53.649 10:30:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.022 10:30:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.022 Message suppressed 999 
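The recurring `kill -0 489673` checks in the trace are the standard shell idiom for asking whether the `spdk_nvme_perf` process is still alive without actually signalling it: signal number 0 performs only the existence/permission check. A minimal self-contained demo (the `sleep` stands in for the long-running perf process):

```shell
# kill -0 liveness check, as used by ns_hotplug_stress.sh@44 in the trace.
sleep 5 &                        # stand-in for the spdk_nvme_perf workload
perf_pid=$!
if kill -0 "$perf_pid" 2>/dev/null; then
    perf_state=alive             # process exists: keep hot-plugging namespaces
else
    perf_state=gone              # process exited: the stress loop would stop
fi
kill "$perf_pid" 2>/dev/null     # clean up the stand-in
wait "$perf_pid" 2>/dev/null
```

In the test script this check gates each loop iteration, so the hot-plug churn runs only for as long as the 30-second perf workload is active.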
times: Read completed with error (sct=0, sc=11) 00:19:55.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.022 10:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:19:55.022 10:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:55.280 true 00:19:55.280 10:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:55.280 10:30:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.212 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.212 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:19:56.213 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:56.471 true 00:19:56.471 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:56.471 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.729 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:19:56.987 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:19:56.987 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:56.987 true 00:19:56.987 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:56.987 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:58.361 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:19:58.361 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:58.619 true 00:19:58.619 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:58.619 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.553 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.553 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:19:59.553 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:59.813 true 00:19:59.813 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:19:59.813 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.073 10:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.331 10:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:20:00.331 10:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:20:00.331 true 00:20:00.331 10:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:00.331 10:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:20:01.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:01.709 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.709 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:20:01.709 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:20:01.709 true 00:20:01.709 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:01.709 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.968 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.227 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:20:02.227 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:20:02.486 true 00:20:02.486 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:02.486 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.422 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.422 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:03.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:03.681 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:20:03.681 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:20:03.939 true 00:20:03.939 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:03.939 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.873 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.873 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:20:04.873 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:20:05.132 true 
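The loop body repeating through the trace can be reconstructed from the `ns_hotplug_stress.sh` line numbers (@45, @46, @49, @50): each pass detaches namespace 1 from the subsystem, re-attaches Delay0, and grows the NULL1 bdev by one block, which is what provokes the suppressed read errors on the initiator side. A dry-run sketch, with the NQN and bdev names taken from the log (swap the `echo` for the real `scripts/rpc.py` against a live target):

```shell
# Dry-run sketch of one ns_hotplug_stress iteration; three passes shown.
RPC="echo rpc.py"                # dry run: print the RPCs instead of sending them
null_size=1000
for _ in 1 2 3; do
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # detach ns 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach
    null_size=$((null_size + 1))                                    # 1001, 1002, ...
    $RPC bdev_null_resize NULL1 "$null_size"                        # grow NULL1
done
```

This matches the `null_size=1001 ... 1025` progression visible in the trace; the perf workload keeps issuing reads throughout, so each detach window surfaces as a burst of `sct=0, sc=11` completions.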
00:20:05.132 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:05.132 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.391 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:05.664 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:20:05.664 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:20:05.664 true 00:20:05.664 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:05.664 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:07.037 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:07.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:07.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:07.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:07.037 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:20:07.037 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:20:07.037 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:20:07.295 true 00:20:07.295 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:07.295 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:08.229 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.229 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:20:08.229 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:20:08.487 true 00:20:08.487 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:08.487 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.487 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.746 10:31:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:20:08.746 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:20:09.005 true 00:20:09.005 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:09.005 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 10:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:10.383 10:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:20:10.383 10:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:20:10.641 true 00:20:10.641 10:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:10.641 10:31:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:11.577 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:11.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.577 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:20:11.577 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:20:11.837 true 00:20:11.837 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:11.837 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:12.095 10:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:12.095 10:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:20:12.095 10:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:20:12.353 true 00:20:12.353 10:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:12.353 10:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:13.726 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:20:13.726 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:20:13.997 true 00:20:13.997 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:13.997 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:14.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:14.933 10:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:14.933 10:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:20:14.933 10:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:20:15.190 true 00:20:15.190 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:15.190 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:15.448 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:15.448 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:20:15.448 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:20:15.706 true 00:20:15.706 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673 00:20:15.706 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:17.078 Initializing NVMe Controllers 00:20:17.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.078 Controller IO queue size 128, less than required. 00:20:17.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:17.078 Controller IO queue size 128, less than required. 
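The performance summary printed at this point reports per-namespace IOPS and latency and a Total row. As a reader-side sanity check (numbers copied from the log above; not part of the test itself), the Total row's average latency is the IOPS-weighted mean of the per-namespace averages, not a plain mean:

```python
# Per-namespace rows from the perf summary: (IOPS, MiB/s, avg latency in us).
rows = [
    (1996.55, 0.97, 44302.29),   # NSID 1 (the repeatedly hotplugged Delay0 namespace)
    (17045.33, 8.32, 7509.38),   # NSID 2
]

total_iops = sum(r[0] for r in rows)
total_mibs = sum(r[1] for r in rows)
# Weight each namespace's average latency by its IOPS share.
weighted_avg = sum(r[0] * r[2] for r in rows) / total_iops

print(round(total_iops, 2))    # 19041.88, matching the Total row
print(round(weighted_avg, 2))  # ~11367, matching the Total row's 11367.14
```

The large gap between the two averages (44 ms vs 7.5 ms) is expected here: NSID 1 is the namespace being removed and re-added in a loop, so its I/O spends long stretches queued or failing.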
00:20:17.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:17.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:17.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:17.078 Initialization complete. Launching workers.
00:20:17.078 ========================================================
00:20:17.078 Latency(us)
00:20:17.078 Device Information : IOPS MiB/s Average min max
00:20:17.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1996.55 0.97 44302.29 1935.23 1013015.54
00:20:17.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17045.33 8.32 7509.38 2790.44 379570.70
00:20:17.078 ========================================================
00:20:17.078 Total : 19041.88 9.30 11367.14 1935.23 1013015.54
00:20:17.078
00:20:17.078 10:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:20:17.078 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:20:17.078 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:20:17.335 true
00:20:17.335 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 489673
00:20:17.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (489673) - No such process
00:20:17.335 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 489673
00:20:17.335 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:17.335 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:17.593 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:20:17.593 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:20:17.593 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:20:17.593 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:17.593 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:20:17.850 null0 00:20:17.850 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:17.850 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:17.850 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:20:18.109 null1 00:20:18.109 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:18.109 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:18.109 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null2 100 4096 00:20:18.109 null2 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:20:18.367 null3 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:18.367 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:20:18.625 null4 00:20:18.625 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:18.625 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:18.625 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:20:18.885 null5 00:20:18.885 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:18.885 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:18.885 10:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:20:19.145 null6 00:20:19.145 10:31:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:20:19.145 null7 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
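At this point the script has created eight null bdevs (null0..null7) and starts one backgrounded `add_remove` worker per bdev, each repeatedly pairing `nvmf_subsystem_add_ns` with `nvmf_subsystem_remove_ns` for its own namespace ID. A minimal simulation of that worker pattern, with Python threads standing in for the script's backgrounded subshells and `wait` (the RPC calls are modeled as dictionary updates, not real SPDK calls):

```python
import threading

# Hypothetical stand-in for the subsystem's namespace table: nsid -> bdev name.
subsystem = {}
lock = threading.Lock()

def add_remove(nsid, bdev, iterations=10):
    """Mirror the add_remove helper: add then remove the same nsid, 10 times."""
    for _ in range(iterations):
        with lock:
            subsystem[nsid] = bdev  # models: rpc.py nvmf_subsystem_add_ns -n <nsid> ... <bdev>
        with lock:
            del subsystem[nsid]     # models: rpc.py nvmf_subsystem_remove_ns ... <nsid>

# One worker per null bdev: null0..null7 mapped to nsid 1..8.
threads = [
    threading.Thread(target=add_remove, args=(i + 1, f"null{i}"))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:   # analogous to the script's `wait <pids>` on the workers
    t.join()

print(subsystem)  # {} — every worker's last action is a remove
```

Because each namespace ID is owned by exactly one worker, the adds and removes never collide across threads; the stress comes from eight of them hammering the same subsystem concurrently.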
00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:19.145 10:31:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.145 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:20:19.146 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:20:19.405 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 495296 495298 495299 495301 495303 495305 495307 495308 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:19.406 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:19.665 10:31:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:19.665 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:19.924 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:19.925 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:19.925 10:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.183 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:20.441 10:31:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.441 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:20.699 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:20.957 10:31:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:20.957 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:21.215 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:21.215 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:21.215 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:21.215 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:21.215 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:21.216 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:21.216 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:21.216 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.475 10:31:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:21.475 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:21.733 10:31:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:21.733 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:21.992 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.250 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.509 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:22.767 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:22.768 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:23.026 10:31:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.026 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:23.283 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:23.541 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.541 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.541 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.541 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.541 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.542 rmmod nvme_tcp 00:20:23.542 rmmod nvme_fabrics 00:20:23.542 rmmod nvme_keyring 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 489190 ']' 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 489190 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 489190 ']' 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 489190 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489190 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489190' 00:20:23.542 killing process with pid 489190 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 489190 00:20:23.542 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 489190 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.805 10:31:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.805 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:26.340 00:20:26.340 real 0m48.167s 00:20:26.340 user 3m17.186s 00:20:26.340 sys 0m15.456s 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.340 ************************************ 00:20:26.340 END TEST nvmf_ns_hotplug_stress 00:20:26.340 ************************************ 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.340 10:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:26.340 ************************************ 00:20:26.340 START TEST nvmf_delete_subsystem 00:20:26.340 ************************************ 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:26.340 * Looking for test storage... 00:20:26.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.340 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.340 
10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.341 --rc genhtml_branch_coverage=1 00:20:26.341 --rc genhtml_function_coverage=1 00:20:26.341 --rc genhtml_legend=1 
00:20:26.341 --rc geninfo_all_blocks=1 00:20:26.341 --rc geninfo_unexecuted_blocks=1 00:20:26.341 00:20:26.341 ' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.341 --rc genhtml_branch_coverage=1 00:20:26.341 --rc genhtml_function_coverage=1 00:20:26.341 --rc genhtml_legend=1 00:20:26.341 --rc geninfo_all_blocks=1 00:20:26.341 --rc geninfo_unexecuted_blocks=1 00:20:26.341 00:20:26.341 ' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.341 --rc genhtml_branch_coverage=1 00:20:26.341 --rc genhtml_function_coverage=1 00:20:26.341 --rc genhtml_legend=1 00:20:26.341 --rc geninfo_all_blocks=1 00:20:26.341 --rc geninfo_unexecuted_blocks=1 00:20:26.341 00:20:26.341 ' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.341 --rc genhtml_branch_coverage=1 00:20:26.341 --rc genhtml_function_coverage=1 00:20:26.341 --rc genhtml_legend=1 00:20:26.341 --rc geninfo_all_blocks=1 00:20:26.341 --rc geninfo_unexecuted_blocks=1 00:20:26.341 00:20:26.341 ' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.341 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.342 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.342 10:31:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.342 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.342 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:31.614 10:31:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:31.614 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:31.614 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:31.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:31.615 Found net devices under 0000:86:00.0: cvl_0_0 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:31.615 10:31:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:31.615 Found net devices under 0000:86:00.1: cvl_0_1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:31.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:20:31.615 00:20:31.615 --- 10.0.0.2 ping statistics --- 00:20:31.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.615 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:31.615 00:20:31.615 --- 10.0.0.1 ping statistics --- 00:20:31.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.615 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=499684 00:20:31.615 10:31:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 499684 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 499684 ']' 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.615 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.615 [2024-12-09 10:31:32.696722] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:20:31.615 [2024-12-09 10:31:32.696768] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.615 [2024-12-09 10:31:32.767447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:31.873 [2024-12-09 10:31:32.810078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
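At this point the trace launches `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then blocks in `waitforlisten` until the target accepts connections on `/var/tmp/spdk.sock`. The helper's actual shell implementation is not shown in this log; the following is a hypothetical Python re-implementation of the same poll-until-connectable idea, demonstrated against a stand-in Unix-socket listener rather than a real `nvmf_tgt`.

```python
import os, socket, tempfile, threading, time

def wait_for_listen(sock_path, timeout=5.0, interval=0.05):
    """Poll until something accepts connections on a Unix-domain socket,
    mirroring what the waitforlisten step in the trace is waiting for."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)   # succeeds once the server is listening
            return True
        except OSError:
            time.sleep(interval)   # socket file missing or not listening yet
        finally:
            s.close()
    return False

# Stand-in for nvmf_tgt: start listening 200 ms after the poller begins.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
threading.Timer(0.2, lambda: (server.bind(path), server.listen(1))).start()
ready = wait_for_listen(path)
```

The real helper additionally checks that the PID is still alive between attempts, which this sketch omits.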
00:20:31.873 [2024-12-09 10:31:32.810112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.874 [2024-12-09 10:31:32.810122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.874 [2024-12-09 10:31:32.810128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.874 [2024-12-09 10:31:32.810134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.874 [2024-12-09 10:31:32.811268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.874 [2024-12-09 10:31:32.811272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 [2024-12-09 10:31:32.953210] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 [2024-12-09 10:31:32.973420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 NULL1 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 Delay0 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.874 10:31:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 10:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.874 10:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=499707 00:20:31.874 10:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:31.874 10:31:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:32.132 [2024-12-09 10:31:33.075211] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
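The `rpc_cmd` calls traced above (delete_subsystem.sh lines 15-26, plus the `nvmf_delete_subsystem` that follows) each go over the Unix socket as an SPDK JSON-RPC request. The sketch below reconstructs that sequence as plain JSON-RPC 2.0 payloads: the method names and argument values are taken verbatim from the log, but the parameter field names (`trtype`, `listen_address`, `num_blocks`, the delay-bdev latency fields, and so on) are assumptions about SPDK's RPC schema, not something this log confirms.

```python
import json

# rpc_cmd sequence from the trace, as JSON-RPC 2.0 payloads.
# Method names are verbatim from the log; parameter names are a sketch.
calls = [
    ("nvmf_create_transport", {"trtype": "TCP"}),                  # -t tcp -o -u 8192
    ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "allow_any_host": True,             # -a
                               "serial_number": "SPDK00000000000001",
                               "max_namespaces": 10}),             # -m 10
    ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "listen_address": {"trtype": "TCP",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"}}),
    ("bdev_null_create", {"name": "NULL1", "num_blocks": 1000, "block_size": 512}),
    ("bdev_delay_create", {"base_bdev_name": "NULL1", "name": "Delay0",
                           "avg_read_latency": 1000000,            # -r/-t/-w/-n
                           "p99_read_latency": 1000000,
                           "avg_write_latency": 1000000,
                           "p99_write_latency": 1000000}),
    ("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "namespace": {"bdev_name": "Delay0"}}),
    ("nvmf_delete_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1"}),
]

payloads = [json.dumps({"jsonrpc": "2.0", "id": i, "method": m, "params": p})
            for i, (m, p) in enumerate(calls, start=1)]
```

The point of the test is the last payload: the subsystem is deleted while `spdk_nvme_perf` still has queued I/O against `Delay0`, which is what produces the aborted completions below.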
00:20:34.035 10:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.035 10:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.035 10:31:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error 
(sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 starting I/O failed: -6 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 [2024-12-09 10:31:35.236325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c000c40 is same with the state(6) to be set 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Write completed with 
error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.294 Write completed with error (sct=0, sc=8) 00:20:34.294 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 [2024-12-09 10:31:35.237855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c00d020 is same with the state(6) to be set 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 
00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 starting I/O failed: -6 00:20:34.295 Read completed with error (sct=0, sc=8) 00:20:34.295 Write completed with error (sct=0, sc=8) 
00:20:34.295 Read completed with error (sct=0, sc=8)
00:20:34.295 starting I/O failed: -6
00:20:34.295 Write completed with error (sct=0, sc=8)
00:20:34.295 [2024-12-09 10:31:35.238275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483680 is same with the state(6) to be set
00:20:34.295 [2024-12-09 10:31:35.238481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c00d350 is same with the state(6) to be set
00:20:35.230 [2024-12-09 10:31:36.212500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24849b0 is same with the state(6) to be set
00:20:35.230 [2024-12-09 10:31:36.239894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c00d680 is same with the state(6) to be set
00:20:35.230 [2024-12-09 10:31:36.240918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24834a0 is same with the state(6) to be set
00:20:35.231 [2024-12-09 10:31:36.241090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24832c0 is same with the state(6) to be set
00:20:35.231 [2024-12-09 10:31:36.241621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483860 is same with the state(6) to be set
00:20:35.231 Initializing NVMe Controllers
00:20:35.231 Attached to NVMe over
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:35.231 Controller IO queue size 128, less than required.
00:20:35.231 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:35.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:20:35.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:20:35.231 Initialization complete. Launching workers.
00:20:35.231 ========================================================
00:20:35.231                                                                           Latency(us)
00:20:35.231 Device Information                                                      : IOPS       MiB/s  Average     min      max
00:20:35.231 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.95     0.09   947980.67   670.10   1012753.79
00:20:35.231 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 140.36     0.07   939462.62   1538.00  1011879.66
00:20:35.231 ========================================================
00:20:35.231 Total                                                                   : 331.32     0.16   944371.98   670.10   1012753.79
00:20:35.231
00:20:35.231 [2024-12-09 10:31:36.242196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24849b0 (9): Bad file descriptor
00:20:35.231 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.231 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:20:35.231 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 499707
00:20:35.231 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:20:35.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem
-- target/delete_subsystem.sh@35 -- # kill -0 499707 00:20:35.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (499707) - No such process 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 499707 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 499707 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 499707 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:35.803 [2024-12-09 10:31:36.773154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=500398 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 
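The xtrace above (perf_pid=500398, delay=0, then repeated kill -0 / sleep 0.5 entries) is the standard shell pattern for polling until a backgrounded process exits. A minimal standalone sketch of that loop; the function name, stand-in child process, and timeout of 20 iterations are illustrative, not taken verbatim from delete_subsystem.sh:

```shell
#!/usr/bin/env bash
# Sketch of the poll-until-exit loop traced above. kill -0 sends no
# signal; it only tests whether the PID still exists, which is why the
# log later shows "kill: (500398) - No such process" once perf is gone.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        # give up after ~10s (20 iterations of 0.5s), mirroring
        # the (( delay++ > 20 )) guard in the trace
        (( delay++ > 20 )) && return 1
        sleep 0.5
    done
    return 0
}

sleep 1 &            # stand-in for the spdk_nvme_perf child
wait_for_exit $!
echo "exited: $?"
```

Once the child dies and the shell reaps it, `kill -0` starts failing and the loop falls through with status 0; only a child that outlives the iteration budget makes the function return 1.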
00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:35.803 10:31:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:35.803 [2024-12-09 10:31:36.847204] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:36.166 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:36.166 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:36.166 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:36.804 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:36.804 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:36.804 10:31:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:37.372 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:37.372 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:37.372 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:37.631 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:37.631 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:37.631 10:31:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:38.198 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:38.198 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:38.198 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:38.764 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:38.764 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398 00:20:38.764 10:31:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:39.023 Initializing NVMe Controllers 00:20:39.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.023 Controller IO queue size 128, less than required. 00:20:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:39.023 Initialization complete. Launching workers. 
00:20:39.023 ========================================================
00:20:39.023                                                                           Latency(us)
00:20:39.023 Device Information                                                      : IOPS       MiB/s  Average     min         max
00:20:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00     0.06   1003320.71  1000158.30  1041370.58
00:20:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00     0.06   1004792.45  1000181.21  1011873.85
00:20:39.023 ========================================================
00:20:39.023 Total                                                                   : 256.00     0.12   1004056.58  1000158.30  1041370.58
00:20:39.023
00:20:39.281 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 500398
00:20:39.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (500398) - No such process
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 500398
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:20:39.282 rmmod nvme_tcp 00:20:39.282 rmmod nvme_fabrics 00:20:39.282 rmmod nvme_keyring 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 499684 ']' 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 499684 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 499684 ']' 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 499684 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 499684 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 499684' 00:20:39.282 killing process with pid 499684 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 499684 00:20:39.282 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 499684 
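The killprocess trace above checks the PID's command name with `ps --no-headers -o comm=` before killing it, then waits. A standalone sketch of that check-kill-reap sequence; the function body is a reconstruction from the traced steps, not the verbatim autotest_common.sh helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: confirm the PID is
# alive, look up its command name, kill it, then reap it so the PID
# cannot be recycled while callers still reference it.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1          # already gone
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 in the log
    echo "killing process with pid $pid ($name)"
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true                 # reap; ignore kill status
}

sleep 30 &          # stand-in for the nvmf target process
killprocess $!
```

Reaping with `wait` is what lets the later `kill -0` probes report "No such process" instead of finding a zombie.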
00:20:39.540 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.541 10:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.076 00:20:42.076 real 0m15.704s 00:20:42.076 user 0m29.407s 00:20:42.076 sys 0m5.040s 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:42.076 ************************************ 00:20:42.076 END TEST 
nvmf_delete_subsystem 00:20:42.076 ************************************ 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:42.076 ************************************ 00:20:42.076 START TEST nvmf_host_management 00:20:42.076 ************************************ 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:42.076 * Looking for test storage... 00:20:42.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.076 10:31:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.076 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.076 --rc genhtml_branch_coverage=1 00:20:42.076 --rc genhtml_function_coverage=1 00:20:42.076 --rc genhtml_legend=1 00:20:42.076 --rc 
geninfo_all_blocks=1 00:20:42.077 --rc geninfo_unexecuted_blocks=1 00:20:42.077 00:20:42.077 ' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.077 --rc genhtml_branch_coverage=1 00:20:42.077 --rc genhtml_function_coverage=1 00:20:42.077 --rc genhtml_legend=1 00:20:42.077 --rc geninfo_all_blocks=1 00:20:42.077 --rc geninfo_unexecuted_blocks=1 00:20:42.077 00:20:42.077 ' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.077 --rc genhtml_branch_coverage=1 00:20:42.077 --rc genhtml_function_coverage=1 00:20:42.077 --rc genhtml_legend=1 00:20:42.077 --rc geninfo_all_blocks=1 00:20:42.077 --rc geninfo_unexecuted_blocks=1 00:20:42.077 00:20:42.077 ' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.077 --rc genhtml_branch_coverage=1 00:20:42.077 --rc genhtml_function_coverage=1 00:20:42.077 --rc genhtml_legend=1 00:20:42.077 --rc geninfo_all_blocks=1 00:20:42.077 --rc geninfo_unexecuted_blocks=1 00:20:42.077 00:20:42.077 ' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.077 
10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.077 10:31:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
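[Editor's note] The run of declarations traced above builds per-NIC-family arrays (`e810`, `x722`, `mlx`) that the script then fills from a `pci_bus_cache` lookup keyed by vendor:device ID. A minimal standalone sketch of that classify-by-ID pattern follows; the cache contents here are invented for illustration and do not come from this run:

```shell
#!/usr/bin/env bash
# Toy pci_bus_cache keyed by "vendor:device"; in the real script the
# values are PCI addresses discovered from the bus, not hard-coded.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"  # hypothetical E810 ports
  ["0x15b3:0x1017"]=""                           # no ConnectX-5 present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()

# Unquoted expansion word-splits the cached address list into array
# elements; an empty cache entry appends nothing (same as in the log).
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

pci_devs=("${e810[@]}")
echo "e810 count: ${#e810[@]}, mlx count: ${#mlx[@]}"
```

The deliberate lack of quoting on the `+=(${...})` appends is what lets one cache entry contribute several PCI addresses at once.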
00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.358 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.358 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.358 10:31:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.358 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.358 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.359 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.359 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:20:47.618 00:20:47.618 --- 10.0.0.2 ping statistics --- 00:20:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.618 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:47.618 00:20:47.618 --- 10.0.0.1 ping statistics --- 00:20:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.618 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.618 10:31:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=504584 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 504584 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 504584 ']' 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.618 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.618 [2024-12-09 10:31:48.648262] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:20:47.618 [2024-12-09 10:31:48.648306] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.618 [2024-12-09 10:31:48.717320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.618 [2024-12-09 10:31:48.760740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.618 [2024-12-09 10:31:48.760779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.618 [2024-12-09 10:31:48.760786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.618 [2024-12-09 10:31:48.760793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.618 [2024-12-09 10:31:48.760798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
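[Editor's note] The `waitforlisten 504584` call above blocks until the freshly started `nvmf_tgt` exposes its RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries` times. A self-contained sketch of that poll-until-present pattern, with a plain file and a background touch standing in for the target process and its socket:

```shell
#!/usr/bin/env bash
# Stand-in for waitforlisten: poll until the app's RPC socket appears.
# A temp file plays the role of /var/tmp/spdk.sock here.
sock=$(mktemp -u)
( sleep 0.2; : > "$sock" ) &   # hypothetical app creating its socket late

max_retries=100
i=0
ok=0
until [ -e "$sock" ] || [ "$i" -ge "$max_retries" ]; do
  i=$((i + 1))
  sleep 0.05                   # real script also probes the RPC endpoint
done
[ -e "$sock" ] && ok=1
wait
rm -f "$sock"
echo "listening after $i polls (ok=$ok)"
```

The real helper additionally issues an RPC over the socket before declaring the target ready; this sketch only models the retry loop.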
00:20:47.618 [2024-12-09 10:31:48.762522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.618 [2024-12-09 10:31:48.762629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.618 [2024-12-09 10:31:48.762737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.618 [2024-12-09 10:31:48.762738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.877 [2024-12-09 10:31:48.905093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.877 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:47.878 10:31:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.878 Malloc0 00:20:47.878 [2024-12-09 10:31:48.978472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.878 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=504674 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 504674 /var/tmp/bdevperf.sock 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 504674 ']' 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.878 { 00:20:47.878 "params": { 00:20:47.878 "name": "Nvme$subsystem", 00:20:47.878 "trtype": "$TEST_TRANSPORT", 00:20:47.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.878 "adrfam": "ipv4", 00:20:47.878 "trsvcid": "$NVMF_PORT", 00:20:47.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.878 "hdgst": ${hdgst:-false}, 
00:20:47.878 "ddgst": ${ddgst:-false} 00:20:47.878 }, 00:20:47.878 "method": "bdev_nvme_attach_controller" 00:20:47.878 } 00:20:47.878 EOF 00:20:47.878 )") 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:20:47.878 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.878 "params": { 00:20:47.878 "name": "Nvme0", 00:20:47.878 "trtype": "tcp", 00:20:47.878 "traddr": "10.0.0.2", 00:20:47.878 "adrfam": "ipv4", 00:20:47.878 "trsvcid": "4420", 00:20:47.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.878 "hdgst": false, 00:20:47.878 "ddgst": false 00:20:47.878 }, 00:20:47.878 "method": "bdev_nvme_attach_controller" 00:20:47.878 }' 00:20:48.135 [2024-12-09 10:31:49.074502] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:20:48.135 [2024-12-09 10:31:49.074546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504674 ] 00:20:48.135 [2024-12-09 10:31:49.142361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.135 [2024-12-09 10:31:49.183700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.394 Running I/O for 10 seconds... 
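[Editor's note] The `gen_nvmf_target_json` trace above collects one heredoc fragment per subsystem into a `config` array and prints the result for bdevperf's `--json /dev/fd/63`. A cut-down sketch of that assembly (field values here are placeholders, not this run's real config, and the real helper also merges the fragments through `jq`):

```shell
#!/usr/bin/env bash
# Build one JSON fragment per subsystem via config+=("$(cat <<EOF ...)"),
# mirroring the pattern in the log, then print the collected fragments.
config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": { "name": "Nvme$subsystem", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,   # the script sets IFS=, so "$*"-style joins are comma-separated
joined=$(printf '%s\n' "${config[@]}")
echo "$joined"
```

Feeding the output through a process substitution (`--json <(...)`) is what produces the `/dev/fd/63` path seen in the bdevperf command line.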
00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:20:48.394 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:20:48.654 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:20:48.654 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:48.654 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:48.655 [2024-12-09 10:31:49.752983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.753071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 [2024-12-09 
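The `waitforio` sequence traced above (host_management.sh lines 52-64) polls `bdev_get_iostat` through `jq -r '.bdevs[0].num_read_ops'` until the counter reaches 100 or ten retries are exhausted; the log shows two polls, 67 then 643, before `break`. A self-contained sketch of that loop, with the `rpc_cmd | jq` pipeline stubbed by those two sampled values (the stub values are taken from this log, not from the script itself):

```shell
#!/bin/sh
# Sketch of the waitforio polling pattern: succeed once the read-op
# count crosses the threshold, give up after the retry budget.
ret=1
i=10
for read_io_count in 67 643; do   # stub for: rpc_cmd ... bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
  [ "$i" -eq 0 ] && break          # retry budget exhausted
  if [ "$read_io_count" -ge 100 ]; then
    ret=0                          # enough I/O observed; test may proceed
    break
  fi
  sleep 0.25                       # matches host_management.sh@62
  i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"
```

With the sampled values the first poll fails the threshold, the second passes, so the loop exits with `ret=0` after one 0.25 s sleep, mirroring the trace.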
10:31:49.753077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27b60 is same with the state(6) to be set 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.655 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:48.655 [2024-12-09 10:31:49.761653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.655 [2024-12-09 10:31:49.761691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.655 [2024-12-09 10:31:49.761709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.655 [2024-12-09 10:31:49.761725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.655 [2024-12-09 10:31:49.761740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c120 is same with the state(6) to be set 00:20:48.655 [2024-12-09 10:31:49.761784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.761988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.761996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.655 [2024-12-09 10:31:49.762164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.655 [2024-12-09 10:31:49.762172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.656 [2024-12-09 10:31:49.762230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.656 [2024-12-09 10:31:49.762655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.656 [2024-12-09 10:31:49.762759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.656 [2024-12-09 10:31:49.762767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.657 [2024-12-09 10:31:49.762773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.657 [2024-12-09 10:31:49.763742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:48.657 task offset: 98304 on job bdev=Nvme0n1 fails 00:20:48.657 00:20:48.657 Latency(us) 00:20:48.657 [2024-12-09T09:31:49.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.657 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.657 Job: Nvme0n1 ended in about 0.42 seconds with error 00:20:48.657 Verification LBA range: start 0x0 length 0x400 00:20:48.657 Nvme0n1 : 0.42 1843.71 115.23 153.64 0.00 31198.32 1659.77 28151.99 00:20:48.657 [2024-12-09T09:31:49.833Z] =================================================================================================================== 00:20:48.657 [2024-12-09T09:31:49.833Z] Total : 1843.71 115.23 153.64 0.00 31198.32 1659.77 28151.99 00:20:48.657 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.657 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:20:48.657 [2024-12-09 10:31:49.766153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:48.657 [2024-12-09 10:31:49.766176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c120 (9): Bad file descriptor 00:20:48.657 [2024-12-09 10:31:49.773145] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 504674 00:20:50.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (504674) - No such process 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:20:50.033 { 00:20:50.033 "params": { 00:20:50.033 "name": "Nvme$subsystem", 00:20:50.033 "trtype": "$TEST_TRANSPORT", 00:20:50.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.033 "adrfam": "ipv4", 00:20:50.033 "trsvcid": "$NVMF_PORT", 00:20:50.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.033 "hdgst": ${hdgst:-false}, 00:20:50.033 "ddgst": ${ddgst:-false} 00:20:50.033 }, 00:20:50.033 "method": "bdev_nvme_attach_controller" 00:20:50.033 } 00:20:50.033 EOF 00:20:50.033 )") 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:20:50.033 10:31:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:50.033 "params": { 00:20:50.033 "name": "Nvme0", 00:20:50.033 "trtype": "tcp", 00:20:50.033 "traddr": "10.0.0.2", 00:20:50.033 "adrfam": "ipv4", 00:20:50.033 "trsvcid": "4420", 00:20:50.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:50.033 "hdgst": false, 00:20:50.033 "ddgst": false 00:20:50.033 }, 00:20:50.033 "method": "bdev_nvme_attach_controller" 00:20:50.033 }' 00:20:50.033 [2024-12-09 10:31:50.820205] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:20:50.033 [2024-12-09 10:31:50.820253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504933 ] 00:20:50.033 [2024-12-09 10:31:50.885536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.033 [2024-12-09 10:31:50.926954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.033 Running I/O for 1 seconds... 00:20:51.410 1728.00 IOPS, 108.00 MiB/s 00:20:51.410 Latency(us) 00:20:51.410 [2024-12-09T09:31:52.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.410 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.410 Verification LBA range: start 0x0 length 0x400 00:20:51.410 Nvme0n1 : 1.03 1745.00 109.06 0.00 0.00 36088.68 6667.58 28151.99 00:20:51.410 [2024-12-09T09:31:52.586Z] =================================================================================================================== 00:20:51.410 [2024-12-09T09:31:52.586Z] Total : 1745.00 109.06 0.00 0.00 36088.68 6667.58 28151.99 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:20:51.410 10:31:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.410 rmmod nvme_tcp 00:20:51.410 rmmod nvme_fabrics 00:20:51.410 rmmod nvme_keyring 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 504584 ']' 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 504584 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 504584 ']' 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 504584 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 504584 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 504584' 00:20:51.410 killing process with pid 504584 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 504584 00:20:51.410 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 504584 00:20:51.669 [2024-12-09 10:31:52.717947] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.669 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:54.206 00:20:54.206 real 0m12.040s 00:20:54.206 user 0m19.675s 00:20:54.206 sys 0m5.245s 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:54.206 ************************************ 00:20:54.206 END TEST nvmf_host_management 00:20:54.206 ************************************ 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:54.206 ************************************ 00:20:54.206 START TEST nvmf_lvol 00:20:54.206 ************************************ 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:54.206 * Looking for test storage... 
00:20:54.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:20:54.206 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.206 10:31:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.206 --rc genhtml_branch_coverage=1 00:20:54.206 --rc genhtml_function_coverage=1 00:20:54.206 --rc genhtml_legend=1 00:20:54.206 --rc geninfo_all_blocks=1 00:20:54.206 --rc geninfo_unexecuted_blocks=1 
00:20:54.206 00:20:54.206 ' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.206 --rc genhtml_branch_coverage=1 00:20:54.206 --rc genhtml_function_coverage=1 00:20:54.206 --rc genhtml_legend=1 00:20:54.206 --rc geninfo_all_blocks=1 00:20:54.206 --rc geninfo_unexecuted_blocks=1 00:20:54.206 00:20:54.206 ' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.206 --rc genhtml_branch_coverage=1 00:20:54.206 --rc genhtml_function_coverage=1 00:20:54.206 --rc genhtml_legend=1 00:20:54.206 --rc geninfo_all_blocks=1 00:20:54.206 --rc geninfo_unexecuted_blocks=1 00:20:54.206 00:20:54.206 ' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.206 --rc genhtml_branch_coverage=1 00:20:54.206 --rc genhtml_function_coverage=1 00:20:54.206 --rc genhtml_legend=1 00:20:54.206 --rc geninfo_all_blocks=1 00:20:54.206 --rc geninfo_unexecuted_blocks=1 00:20:54.206 00:20:54.206 ' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.206 10:31:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.206 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.477 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.478 
10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.478 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.478 10:32:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.478 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:20:59.478 00:20:59.478 --- 10.0.0.2 ping statistics --- 00:20:59.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.478 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:20:59.478 00:20:59.478 --- 10.0.0.1 ping statistics --- 00:20:59.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.478 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.478 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=508703 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 508703 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 508703 ']' 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.739 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:59.739 [2024-12-09 10:32:00.711303] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
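The `waitforlisten 508703` call above polls until the freshly launched `nvmf_tgt` process is up and listening on the UNIX domain socket `/var/tmp/spdk.sock` (note the `rpc_addr` and `max_retries=100` locals in the trace). A minimal sketch of that polling pattern — the function name is illustrative, and it only checks path existence, whereas the real helper also verifies the pid is alive and the socket answers RPCs:

```shell
#!/usr/bin/env bash
# Poll for a path to appear, mirroring the waitforlisten/max_retries
# pattern from autotest_common.sh. Illustrative sketch only.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # The real helper additionally checks that the process is alive
        # and that the socket accepts RPC calls; this only checks existence.
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

# Demo: create the path in the background, then wait for it.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_path "$tmp/spdk.sock" && echo "listening"
wait
rm -rf "$tmp"
```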
00:20:59.739 [2024-12-09 10:32:00.711351] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.739 [2024-12-09 10:32:00.785121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.739 [2024-12-09 10:32:00.829150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.739 [2024-12-09 10:32:00.829182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.739 [2024-12-09 10:32:00.829189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.739 [2024-12-09 10:32:00.829196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.739 [2024-12-09 10:32:00.829201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
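`nvmf_tgt` is started above with `-m 0x7`, and the EAL then reports "Total cores available: 3" with reactors on cores 0, 1, and 2. A small sketch of how such a hex core mask expands into core indices (a generic illustration, not SPDK's actual mask parser):

```shell
# Expand a hex CPU mask like 0x7 into the list of set core indices.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then out+="$core "; fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${out% }"
}

mask_to_cores 0x7    # bits 0-2 set: the three reactor cores above
mask_to_cores 0x18   # bits 3-4 set: the perf run's -c 0x18 lcores
```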
00:20:59.739 [2024-12-09 10:32:00.830432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.739 [2024-12-09 10:32:00.830449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.739 [2024-12-09 10:32:00.830451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.999 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:59.999 [2024-12-09 10:32:01.140809] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.999 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:00.259 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:00.259 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:00.518 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:00.518 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:00.777 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:01.036 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a00eed98-45e2-4cde-9b6d-f75cecfa3abe 00:21:01.036 10:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a00eed98-45e2-4cde-9b6d-f75cecfa3abe lvol 20 00:21:01.036 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=421ede4a-30c6-487f-9c7f-d19f2479d88c 00:21:01.036 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:01.295 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 421ede4a-30c6-487f-9c7f-d19f2479d88c 00:21:01.554 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.813 [2024-12-09 10:32:02.764720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.813 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:02.072 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=509193 00:21:02.072 10:32:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:02.072 10:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:03.009 10:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 421ede4a-30c6-487f-9c7f-d19f2479d88c MY_SNAPSHOT 00:21:03.269 10:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2b552cd6-8fe1-4e49-9e04-8a9709f2aded 00:21:03.269 10:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 421ede4a-30c6-487f-9c7f-d19f2479d88c 30 00:21:03.528 10:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2b552cd6-8fe1-4e49-9e04-8a9709f2aded MY_CLONE 00:21:03.788 10:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3accb6e5-8c35-4951-a6d1-a0586134132a 00:21:03.788 10:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3accb6e5-8c35-4951-a6d1-a0586134132a 00:21:04.355 10:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 509193 00:21:12.474 Initializing NVMe Controllers 00:21:12.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:12.474 Controller IO queue size 128, less than required. 00:21:12.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:12.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:12.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:12.475 Initialization complete. Launching workers. 00:21:12.475 ======================================================== 00:21:12.475 Latency(us) 00:21:12.475 Device Information : IOPS MiB/s Average min max 00:21:12.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11705.90 45.73 10937.30 1641.54 50816.95 00:21:12.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11812.40 46.14 10840.64 1123.38 75735.41 00:21:12.475 ======================================================== 00:21:12.475 Total : 23518.29 91.87 10888.75 1123.38 75735.41 00:21:12.475 00:21:12.475 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:12.475 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 421ede4a-30c6-487f-9c7f-d19f2479d88c 00:21:12.734 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a00eed98-45e2-4cde-9b6d-f75cecfa3abe 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.993 rmmod nvme_tcp 00:21:12.993 rmmod nvme_fabrics 00:21:12.993 rmmod nvme_keyring 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 508703 ']' 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 508703 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 508703 ']' 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 508703 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 508703 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 508703' 00:21:12.993 killing process with pid 508703 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 508703 00:21:12.993 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 508703 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.252 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.790 00:21:15.790 real 0m21.573s 00:21:15.790 user 1m3.104s 00:21:15.790 sys 0m7.277s 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:21:15.790 ************************************ 00:21:15.790 END TEST nvmf_lvol 00:21:15.790 
************************************ 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:15.790 ************************************ 00:21:15.790 START TEST nvmf_lvs_grow 00:21:15.790 ************************************ 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:15.790 * Looking for test storage... 00:21:15.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.790 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:15.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.791 --rc genhtml_branch_coverage=1 00:21:15.791 --rc genhtml_function_coverage=1 00:21:15.791 --rc genhtml_legend=1 00:21:15.791 --rc geninfo_all_blocks=1 00:21:15.791 --rc geninfo_unexecuted_blocks=1 00:21:15.791 00:21:15.791 ' 
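The `cmp_versions 1.15 '<' 2` trace above is scripts/common.sh deciding whether the installed lcov predates 2.0: each version string is split on `.`, `-`, and `:` via `IFS=.-:` and compared field by field. A condensed, self-contained sketch of that comparison, simplified to numeric fields only (the full script also handles string suffixes):

```shell
# Return 0 (true) when version $1 is strictly less than version $2.
# Fields are split on '.', '-' and ':' like scripts/common.sh does.
version_lt() {
    local -a v1 v2
    local i f1 f2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    # Walk the longer of the two field lists; missing fields count as 0.
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        f1=${v1[i]:-0} f2=${v2[i]:-0}
        if (( f1 < f2 )); then return 0; fi
        if (( f1 > f2 )); then return 1; fi
    done
    return 1    # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.0"
```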
00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:15.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.791 --rc genhtml_branch_coverage=1 00:21:15.791 --rc genhtml_function_coverage=1 00:21:15.791 --rc genhtml_legend=1 00:21:15.791 --rc geninfo_all_blocks=1 00:21:15.791 --rc geninfo_unexecuted_blocks=1 00:21:15.791 00:21:15.791 ' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:15.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.791 --rc genhtml_branch_coverage=1 00:21:15.791 --rc genhtml_function_coverage=1 00:21:15.791 --rc genhtml_legend=1 00:21:15.791 --rc geninfo_all_blocks=1 00:21:15.791 --rc geninfo_unexecuted_blocks=1 00:21:15.791 00:21:15.791 ' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:15.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.791 --rc genhtml_branch_coverage=1 00:21:15.791 --rc genhtml_function_coverage=1 00:21:15.791 --rc genhtml_legend=1 00:21:15.791 --rc geninfo_all_blocks=1 00:21:15.791 --rc geninfo_unexecuted_blocks=1 00:21:15.791 00:21:15.791 ' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.791 10:32:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.791 
10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.791 10:32:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.791 
10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.791 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.068 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.068 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.068 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.069 
10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.069 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.069 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.069 10:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.069 10:32:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:21:21.069 00:21:21.069 --- 10.0.0.2 ping statistics --- 00:21:21.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.069 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:21.069 00:21:21.069 --- 10.0.0.1 ping statistics --- 00:21:21.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.069 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.069 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=514572 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 514572 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 514572 ']' 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.329 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:21.329 [2024-12-09 10:32:22.313657] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:21:21.329 [2024-12-09 10:32:22.313701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.329 [2024-12-09 10:32:22.382284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.329 [2024-12-09 10:32:22.423420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.329 [2024-12-09 10:32:22.423457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.329 [2024-12-09 10:32:22.423464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.329 [2024-12-09 10:32:22.423470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.329 [2024-12-09 10:32:22.423475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.329 [2024-12-09 10:32:22.424014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:21.588 [2024-12-09 10:32:22.721104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.588 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:21.846 ************************************ 00:21:21.846 START TEST lvs_grow_clean 00:21:21.846 ************************************ 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:21.846 10:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:21.846 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:22.105 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:22.105 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ad03e07-fc23-4f1c-ad41-e68828b82e74 00:21:22.105 10:32:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74 00:21:22.105 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:22.365 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:22.365 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:22.365 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74 lvol 150 00:21:22.624 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=91e23958-e029-422e-ab09-09f99535e17a 00:21:22.624 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:22.624 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:22.624 [2024-12-09 10:32:23.751814] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:22.624 [2024-12-09 10:32:23.751880] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:22.624 true 00:21:22.624 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74 00:21:22.624 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:22.882 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:22.882 10:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:23.140 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 91e23958-e029-422e-ab09-09f99535e17a 00:21:23.398 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.398 [2024-12-09 10:32:24.477992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.398 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=515073 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:23.657 10:32:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 515073 /var/tmp/bdevperf.sock 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 515073 ']' 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.657 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.657 [2024-12-09 10:32:24.720328] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:21:23.657 [2024-12-09 10:32:24.720376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515073 ] 00:21:23.657 [2024-12-09 10:32:24.785051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.657 [2024-12-09 10:32:24.827208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.914 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.914 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:21:23.914 10:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:24.172 Nvme0n1 00:21:24.172 10:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:24.431 [ 00:21:24.431 { 00:21:24.431 "name": "Nvme0n1", 00:21:24.431 "aliases": [ 00:21:24.431 "91e23958-e029-422e-ab09-09f99535e17a" 00:21:24.431 ], 00:21:24.431 "product_name": "NVMe disk", 00:21:24.431 "block_size": 4096, 00:21:24.431 "num_blocks": 38912, 00:21:24.431 "uuid": "91e23958-e029-422e-ab09-09f99535e17a", 00:21:24.431 "numa_id": 1, 00:21:24.431 "assigned_rate_limits": { 00:21:24.431 "rw_ios_per_sec": 0, 00:21:24.431 "rw_mbytes_per_sec": 0, 00:21:24.431 "r_mbytes_per_sec": 0, 00:21:24.431 "w_mbytes_per_sec": 0 00:21:24.431 }, 00:21:24.431 "claimed": false, 00:21:24.431 "zoned": false, 00:21:24.431 "supported_io_types": { 00:21:24.431 "read": true, 
00:21:24.431 "write": true, 00:21:24.431 "unmap": true, 00:21:24.431 "flush": true, 00:21:24.431 "reset": true, 00:21:24.431 "nvme_admin": true, 00:21:24.431 "nvme_io": true, 00:21:24.431 "nvme_io_md": false, 00:21:24.431 "write_zeroes": true, 00:21:24.431 "zcopy": false, 00:21:24.431 "get_zone_info": false, 00:21:24.431 "zone_management": false, 00:21:24.431 "zone_append": false, 00:21:24.431 "compare": true, 00:21:24.431 "compare_and_write": true, 00:21:24.431 "abort": true, 00:21:24.431 "seek_hole": false, 00:21:24.431 "seek_data": false, 00:21:24.431 "copy": true, 00:21:24.431 "nvme_iov_md": false 00:21:24.431 }, 00:21:24.431 "memory_domains": [ 00:21:24.431 { 00:21:24.431 "dma_device_id": "system", 00:21:24.431 "dma_device_type": 1 00:21:24.431 } 00:21:24.431 ], 00:21:24.431 "driver_specific": { 00:21:24.431 "nvme": [ 00:21:24.431 { 00:21:24.431 "trid": { 00:21:24.431 "trtype": "TCP", 00:21:24.431 "adrfam": "IPv4", 00:21:24.431 "traddr": "10.0.0.2", 00:21:24.431 "trsvcid": "4420", 00:21:24.431 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:24.431 }, 00:21:24.431 "ctrlr_data": { 00:21:24.431 "cntlid": 1, 00:21:24.431 "vendor_id": "0x8086", 00:21:24.431 "model_number": "SPDK bdev Controller", 00:21:24.431 "serial_number": "SPDK0", 00:21:24.431 "firmware_revision": "25.01", 00:21:24.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.431 "oacs": { 00:21:24.431 "security": 0, 00:21:24.431 "format": 0, 00:21:24.431 "firmware": 0, 00:21:24.431 "ns_manage": 0 00:21:24.431 }, 00:21:24.431 "multi_ctrlr": true, 00:21:24.431 "ana_reporting": false 00:21:24.431 }, 00:21:24.431 "vs": { 00:21:24.431 "nvme_version": "1.3" 00:21:24.431 }, 00:21:24.431 "ns_data": { 00:21:24.431 "id": 1, 00:21:24.431 "can_share": true 00:21:24.431 } 00:21:24.431 } 00:21:24.431 ], 00:21:24.431 "mp_policy": "active_passive" 00:21:24.431 } 00:21:24.431 } 00:21:24.431 ] 00:21:24.431 10:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=515090 
00:21:24.431 10:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:21:24.431 10:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:24.698 Running I/O for 10 seconds...
00:21:25.634 Latency(us)
00:21:25.634 [2024-12-09T09:32:26.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:25.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:25.634 Nvme0n1 : 1.00 22811.00 89.11 0.00 0.00 0.00 0.00 0.00
00:21:25.634 [2024-12-09T09:32:26.810Z] ===================================================================================================================
00:21:25.634 [2024-12-09T09:32:26.810Z] Total : 22811.00 89.11 0.00 0.00 0.00 0.00 0.00
00:21:25.634
00:21:26.569 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:26.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:26.569 Nvme0n1 : 2.00 22971.50 89.73 0.00 0.00 0.00 0.00 0.00
00:21:26.569 [2024-12-09T09:32:27.745Z] ===================================================================================================================
00:21:26.569 [2024-12-09T09:32:27.745Z] Total : 22971.50 89.73 0.00 0.00 0.00 0.00 0.00
00:21:26.569
00:21:26.836 true
00:21:26.836 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:26.836 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:21:26.836 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:21:26.836 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:21:26.836 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 515090
00:21:27.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:27.776 Nvme0n1 : 3.00 22991.00 89.81 0.00 0.00 0.00 0.00 0.00
00:21:27.776 [2024-12-09T09:32:28.952Z] ===================================================================================================================
00:21:27.776 [2024-12-09T09:32:28.952Z] Total : 22991.00 89.81 0.00 0.00 0.00 0.00 0.00
00:21:27.776
00:21:28.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:28.714 Nvme0n1 : 4.00 23088.50 90.19 0.00 0.00 0.00 0.00 0.00
00:21:28.714 [2024-12-09T09:32:29.890Z] ===================================================================================================================
00:21:28.714 [2024-12-09T09:32:29.890Z] Total : 23088.50 90.19 0.00 0.00 0.00 0.00 0.00
00:21:28.714
00:21:29.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:29.652 Nvme0n1 : 5.00 23094.60 90.21 0.00 0.00 0.00 0.00 0.00
00:21:29.652 [2024-12-09T09:32:30.828Z] ===================================================================================================================
00:21:29.652 [2024-12-09T09:32:30.828Z] Total : 23094.60 90.21 0.00 0.00 0.00 0.00 0.00
00:21:29.652
00:21:30.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:30.589 Nvme0n1 : 6.00 23110.33 90.27 0.00 0.00 0.00 0.00 0.00
00:21:30.589 [2024-12-09T09:32:31.765Z] ===================================================================================================================
00:21:30.589 [2024-12-09T09:32:31.765Z] Total : 23110.33 90.27 0.00 0.00 0.00 0.00 0.00
00:21:30.589
00:21:31.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:31.546 Nvme0n1 : 7.00 23108.71 90.27 0.00 0.00 0.00 0.00 0.00
00:21:31.546 [2024-12-09T09:32:32.722Z] ===================================================================================================================
00:21:31.546 [2024-12-09T09:32:32.722Z] Total : 23108.71 90.27 0.00 0.00 0.00 0.00 0.00
00:21:31.546
00:21:32.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:32.546 Nvme0n1 : 8.00 23136.25 90.38 0.00 0.00 0.00 0.00 0.00
00:21:32.546 [2024-12-09T09:32:33.722Z] ===================================================================================================================
00:21:32.546 [2024-12-09T09:32:33.722Z] Total : 23136.25 90.38 0.00 0.00 0.00 0.00 0.00
00:21:32.546
00:21:33.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:33.482 Nvme0n1 : 9.00 23160.78 90.47 0.00 0.00 0.00 0.00 0.00
00:21:33.482 [2024-12-09T09:32:34.658Z] ===================================================================================================================
00:21:33.482 [2024-12-09T09:32:34.658Z] Total : 23160.78 90.47 0.00 0.00 0.00 0.00 0.00
00:21:33.482
00:21:34.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:34.858 Nvme0n1 : 10.00 23175.60 90.53 0.00 0.00 0.00 0.00 0.00
00:21:34.858 [2024-12-09T09:32:36.034Z] ===================================================================================================================
00:21:34.858 [2024-12-09T09:32:36.034Z] Total : 23175.60 90.53 0.00 0.00 0.00 0.00 0.00
00:21:34.858
00:21:34.858
00:21:34.858 Latency(us)
00:21:34.858 [2024-12-09T09:32:36.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:34.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:34.858 Nvme0n1 : 10.00 23177.81 90.54 0.00 0.00 5519.04 3219.81 11454.55
00:21:34.858 [2024-12-09T09:32:36.034Z] ===================================================================================================================
00:21:34.858 [2024-12-09T09:32:36.034Z] Total : 23177.81 90.54 0.00 0.00 5519.04 3219.81 11454.55
00:21:34.858 {
00:21:34.858 "results": [
00:21:34.858 {
00:21:34.858 "job": "Nvme0n1",
00:21:34.858 "core_mask": "0x2",
00:21:34.858 "workload": "randwrite",
00:21:34.858 "status": "finished",
00:21:34.858 "queue_depth": 128,
00:21:34.858 "io_size": 4096,
00:21:34.858 "runtime": 10.004567,
00:21:34.858 "iops": 23177.81469203015,
00:21:34.858 "mibps": 90.53833864074278,
00:21:34.858 "io_failed": 0,
00:21:34.858 "io_timeout": 0,
00:21:34.858 "avg_latency_us": 5519.044881841221,
00:21:34.858 "min_latency_us": 3219.8121739130434,
00:21:34.858 "max_latency_us": 11454.553043478261
00:21:34.858 }
00:21:34.858 ],
00:21:34.858 "core_count": 1
00:21:34.858 }
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 515073
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 515073 ']'
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 515073
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515073
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515073'
00:21:34.858 killing process with pid 515073
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 515073
00:21:34.858 Received shutdown signal, test time was about 10.000000 seconds
00:21:34.858
00:21:34.858 Latency(us)
00:21:34.858 [2024-12-09T09:32:36.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:34.858 [2024-12-09T09:32:36.034Z] ===================================================================================================================
00:21:34.858 [2024-12-09T09:32:36.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 515073
00:21:34.858 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:35.117 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:21:35.376 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:35.376 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:21:35.376 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:21:35.376 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:21:35.376 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:21:35.635 [2024-12-09 10:32:36.680618] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:21:35.635 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:35.894 request:
00:21:35.894 {
00:21:35.895 "uuid": "1ad03e07-fc23-4f1c-ad41-e68828b82e74",
00:21:35.895 "method": "bdev_lvol_get_lvstores",
00:21:35.895 "req_id": 1
00:21:35.895 }
00:21:35.895 Got JSON-RPC error response
00:21:35.895 response:
00:21:35.895 {
00:21:35.895 "code": -19,
00:21:35.895 "message": "No such device"
00:21:35.895 }
00:21:35.895 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:21:35.895 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:35.895 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:35.895 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:35.895 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:21:36.163 aio_bdev
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 91e23958-e029-422e-ab09-09f99535e17a
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=91e23958-e029-422e-ab09-09f99535e17a
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:21:36.163 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 91e23958-e029-422e-ab09-09f99535e17a -t 2000
00:21:36.421 [
00:21:36.421 {
00:21:36.421 "name": "91e23958-e029-422e-ab09-09f99535e17a",
00:21:36.421 "aliases": [
00:21:36.421 "lvs/lvol"
00:21:36.421 ],
00:21:36.421 "product_name": "Logical Volume",
00:21:36.421 "block_size": 4096,
00:21:36.421 "num_blocks": 38912,
00:21:36.421 "uuid": "91e23958-e029-422e-ab09-09f99535e17a",
00:21:36.421 "assigned_rate_limits": {
00:21:36.421 "rw_ios_per_sec": 0,
00:21:36.421 "rw_mbytes_per_sec": 0,
00:21:36.421 "r_mbytes_per_sec": 0,
00:21:36.421 "w_mbytes_per_sec": 0
00:21:36.421 },
00:21:36.421 "claimed": false,
00:21:36.421 "zoned": false,
00:21:36.421 "supported_io_types": {
00:21:36.421 "read": true,
00:21:36.421 "write": true,
00:21:36.421 "unmap": true,
00:21:36.421 "flush": false,
00:21:36.421 "reset": true,
00:21:36.421 "nvme_admin": false,
00:21:36.421 "nvme_io": false,
00:21:36.421 "nvme_io_md": false,
00:21:36.421 "write_zeroes": true,
00:21:36.421 "zcopy": false,
00:21:36.421 "get_zone_info": false,
00:21:36.421 "zone_management": false,
00:21:36.421 "zone_append": false,
00:21:36.421 "compare": false,
00:21:36.421 "compare_and_write": false,
00:21:36.421 "abort": false,
00:21:36.421 "seek_hole": true,
00:21:36.421 "seek_data": true,
00:21:36.422 "copy": false,
00:21:36.422 "nvme_iov_md": false
00:21:36.422 },
00:21:36.422 "driver_specific": {
00:21:36.422 "lvol": {
00:21:36.422 "lvol_store_uuid": "1ad03e07-fc23-4f1c-ad41-e68828b82e74",
00:21:36.422 "base_bdev": "aio_bdev",
00:21:36.422 "thin_provision": false,
00:21:36.422 "num_allocated_clusters": 38,
00:21:36.422 "snapshot": false,
00:21:36.422 "clone": false,
00:21:36.422 "esnap_clone": false
00:21:36.422 }
00:21:36.422 }
00:21:36.422 }
00:21:36.422 ]
00:21:36.422 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:21:36.422 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:36.422 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:21:36.681 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:21:36.681 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:36.681 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:21:36.681 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:21:36.681 10:32:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 91e23958-e029-422e-ab09-09f99535e17a
00:21:36.941 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ad03e07-fc23-4f1c-ad41-e68828b82e74
00:21:37.201 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:21:37.461
00:21:37.461 real 0m15.648s
00:21:37.461 user 0m15.179s
00:21:37.461 sys 0m1.475s
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:21:37.461 ************************************
00:21:37.461 END TEST lvs_grow_clean
00:21:37.461 ************************************
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:21:37.461 ************************************
00:21:37.461 START TEST lvs_grow_dirty
************************************
10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:21:37.461 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:21:37.721 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:21:37.721 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:21:37.981 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:37.981 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:37.981 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:21:37.981 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:21:37.981 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:21:37.981 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db lvol 150
00:21:38.240 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=56b563e5-af30-4d2b-b4e4-6dcbb81438da
00:21:38.240 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:21:38.240 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:21:38.500 [2024-12-09 10:32:39.429839] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
[2024-12-09 10:32:39.429886] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:21:38.500 true
00:21:38.500 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:38.500 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:21:38.760 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:21:38.760 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:21:38.760 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56b563e5-af30-4d2b-b4e4-6dcbb81438da
00:21:39.019 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:39.019 [2024-12-09 10:32:40.172072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:39.019 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=517684
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 517684 /var/tmp/bdevperf.sock
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 517684 ']'
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:39.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:39.279 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:21:39.279 [2024-12-09 10:32:40.407944] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:21:39.279 [2024-12-09 10:32:40.407991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517684 ]
00:21:39.538 [2024-12-09 10:32:40.472831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:39.538 [2024-12-09 10:32:40.513417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:39.538 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:39.538 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:21:39.538 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:21:39.797 Nvme0n1
00:21:39.797 10:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:21:40.057 [
00:21:40.057 {
00:21:40.057 "name": "Nvme0n1",
00:21:40.057 "aliases": [
00:21:40.057 "56b563e5-af30-4d2b-b4e4-6dcbb81438da"
00:21:40.057 ],
00:21:40.057 "product_name": "NVMe disk",
00:21:40.057 "block_size": 4096,
00:21:40.057 "num_blocks": 38912,
00:21:40.057 "uuid": "56b563e5-af30-4d2b-b4e4-6dcbb81438da",
00:21:40.057 "numa_id": 1,
00:21:40.057 "assigned_rate_limits": {
00:21:40.057 "rw_ios_per_sec": 0,
00:21:40.057 "rw_mbytes_per_sec": 0,
00:21:40.057 "r_mbytes_per_sec": 0,
00:21:40.057 "w_mbytes_per_sec": 0
00:21:40.057 },
00:21:40.057 "claimed": false,
00:21:40.057 "zoned": false,
00:21:40.057 "supported_io_types": {
00:21:40.057 "read": true,
00:21:40.057 "write": true,
00:21:40.057 "unmap": true,
00:21:40.057 "flush": true,
00:21:40.057 "reset": true,
00:21:40.057 "nvme_admin": true,
00:21:40.057 "nvme_io": true,
00:21:40.057 "nvme_io_md": false,
00:21:40.057 "write_zeroes": true,
00:21:40.057 "zcopy": false,
00:21:40.057 "get_zone_info": false,
00:21:40.057 "zone_management": false,
00:21:40.057 "zone_append": false,
00:21:40.057 "compare": true,
00:21:40.057 "compare_and_write": true,
00:21:40.057 "abort": true,
00:21:40.057 "seek_hole": false,
00:21:40.057 "seek_data": false,
00:21:40.057 "copy": true,
00:21:40.057 "nvme_iov_md": false
00:21:40.057 },
00:21:40.057 "memory_domains": [
00:21:40.057 {
00:21:40.057 "dma_device_id": "system",
00:21:40.057 "dma_device_type": 1
00:21:40.057 }
00:21:40.057 ],
00:21:40.057 "driver_specific": {
00:21:40.057 "nvme": [
00:21:40.057 {
00:21:40.057 "trid": {
00:21:40.057 "trtype": "TCP",
00:21:40.057 "adrfam": "IPv4",
00:21:40.057 "traddr": "10.0.0.2",
00:21:40.057 "trsvcid": "4420",
00:21:40.057 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:21:40.057 },
00:21:40.057 "ctrlr_data": {
00:21:40.057 "cntlid": 1,
00:21:40.057 "vendor_id": "0x8086",
00:21:40.057 "model_number": "SPDK bdev Controller",
00:21:40.057 "serial_number": "SPDK0",
00:21:40.057 "firmware_revision": "25.01",
00:21:40.057 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:40.057 "oacs": {
00:21:40.057 "security": 0,
00:21:40.057 "format": 0,
00:21:40.057 "firmware": 0,
00:21:40.057 "ns_manage": 0
00:21:40.057 },
00:21:40.057 "multi_ctrlr": true,
00:21:40.057 "ana_reporting": false
00:21:40.057 },
00:21:40.057 "vs": {
00:21:40.057 "nvme_version": "1.3"
00:21:40.057 },
00:21:40.057 "ns_data": {
00:21:40.057 "id": 1,
00:21:40.057 "can_share": true
00:21:40.057 }
00:21:40.057 }
00:21:40.057 ],
00:21:40.057 "mp_policy": "active_passive"
00:21:40.057 }
00:21:40.057 }
00:21:40.057 ]
00:21:40.057 10:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=517810
00:21:40.058 10:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:40.058 10:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:21:40.058 Running I/O for 10 seconds...
00:21:41.438 Latency(us)
00:21:41.438 [2024-12-09T09:32:42.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:41.438 Nvme0n1 : 1.00 22779.00 88.98 0.00 0.00 0.00 0.00 0.00
00:21:41.438 [2024-12-09T09:32:42.614Z] ===================================================================================================================
00:21:41.438 [2024-12-09T09:32:42.614Z] Total : 22779.00 88.98 0.00 0.00 0.00 0.00 0.00
00:21:41.438
00:21:42.005 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:42.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:42.005 Nvme0n1 : 2.00 22985.00 89.79 0.00 0.00 0.00 0.00 0.00
00:21:42.005 [2024-12-09T09:32:43.181Z] ===================================================================================================================
00:21:42.005 [2024-12-09T09:32:43.181Z] Total : 22985.00 89.79 0.00 0.00 0.00 0.00 0.00
00:21:42.005
00:21:42.263 true
00:21:42.263 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:42.263 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:21:42.522 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:21:42.522 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:21:42.522 10:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 517810
00:21:43.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:43.088 Nvme0n1 : 3.00 23042.67 90.01 0.00 0.00 0.00 0.00 0.00
00:21:43.088 [2024-12-09T09:32:44.264Z] ===================================================================================================================
00:21:43.088 [2024-12-09T09:32:44.264Z] Total : 23042.67 90.01 0.00 0.00 0.00 0.00 0.00
00:21:43.088
00:21:44.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:44.025 Nvme0n1 : 4.00 23060.25 90.08 0.00 0.00 0.00 0.00 0.00
00:21:44.025 [2024-12-09T09:32:45.201Z] ===================================================================================================================
00:21:44.025 [2024-12-09T09:32:45.201Z] Total : 23060.25 90.08 0.00 0.00 0.00 0.00 0.00
00:21:44.025
00:21:45.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:45.400 Nvme0n1 : 5.00 23114.20 90.29 0.00 0.00 0.00 0.00 0.00
00:21:45.400 [2024-12-09T09:32:46.576Z] ===================================================================================================================
00:21:45.400 [2024-12-09T09:32:46.576Z] Total : 23114.20 90.29 0.00 0.00 0.00 0.00 0.00
00:21:45.400
00:21:46.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:46.338 Nvme0n1 : 6.00 23148.50 90.42 0.00 0.00 0.00 0.00 0.00
00:21:46.338 [2024-12-09T09:32:47.514Z] ===================================================================================================================
00:21:46.338 [2024-12-09T09:32:47.514Z] Total : 23148.50 90.42 0.00 0.00 0.00 0.00 0.00
00:21:46.338
00:21:47.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:47.276 Nvme0n1 : 7.00 23180.29 90.55 0.00 0.00 0.00 0.00 0.00
00:21:47.276 [2024-12-09T09:32:48.452Z] ===================================================================================================================
00:21:47.276 [2024-12-09T09:32:48.452Z] Total : 23180.29 90.55 0.00 0.00 0.00 0.00 0.00
00:21:47.276
00:21:48.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:48.214 Nvme0n1 : 8.00 23200.50 90.63 0.00 0.00 0.00 0.00 0.00
00:21:48.214 [2024-12-09T09:32:49.390Z] ===================================================================================================================
00:21:48.214 [2024-12-09T09:32:49.390Z] Total : 23200.50 90.63 0.00 0.00 0.00 0.00 0.00
00:21:48.214
00:21:49.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:49.152 Nvme0n1 : 9.00 23193.44 90.60 0.00 0.00 0.00 0.00 0.00
00:21:49.152 [2024-12-09T09:32:50.328Z] ===================================================================================================================
00:21:49.152 [2024-12-09T09:32:50.328Z] Total : 23193.44 90.60 0.00 0.00 0.00 0.00 0.00
00:21:49.152
00:21:50.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:50.088 Nvme0n1 : 10.00 23179.40 90.54 0.00 0.00 0.00 0.00 0.00
00:21:50.088 [2024-12-09T09:32:51.264Z] ===================================================================================================================
00:21:50.088 [2024-12-09T09:32:51.264Z] Total : 23179.40 90.54 0.00 0.00 0.00 0.00 0.00
00:21:50.088
00:21:50.088
00:21:50.088 Latency(us)
00:21:50.088 [2024-12-09T09:32:51.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:50.088 Nvme0n1 : 10.00 23182.20 90.56 0.00 0.00 5518.49 3333.79 13734.07
00:21:50.088 [2024-12-09T09:32:51.264Z] ===================================================================================================================
00:21:50.088 [2024-12-09T09:32:51.264Z] Total : 23182.20 90.56 0.00 0.00 5518.49 3333.79 13734.07
00:21:50.088 {
00:21:50.088 "results": [
00:21:50.088 {
00:21:50.088 "job": "Nvme0n1",
00:21:50.088 "core_mask": "0x2",
00:21:50.088 "workload": "randwrite",
00:21:50.088 "status": "finished",
00:21:50.088 "queue_depth": 128,
00:21:50.088 "io_size": 4096,
00:21:50.088 "runtime": 10.004314,
00:21:50.088 "iops": 23182.199199265437,
00:21:50.088 "mibps": 90.55546562213061,
00:21:50.088 "io_failed": 0,
00:21:50.088 "io_timeout": 0,
00:21:50.088 "avg_latency_us": 5518.486293360249,
00:21:50.088 "min_latency_us": 3333.7878260869566,
00:21:50.088 "max_latency_us": 13734.066086956522
00:21:50.088 }
00:21:50.088 ],
00:21:50.088 "core_count": 1
00:21:50.088 }
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 517684
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 517684 ']'
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 517684
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:50.088 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 517684
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 517684'
00:21:50.347 killing process with pid 517684
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 517684
00:21:50.347 Received shutdown signal, test time was about 10.000000 seconds
00:21:50.347
00:21:50.347 Latency(us)
00:21:50.347 [2024-12-09T09:32:51.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.347 [2024-12-09T09:32:51.523Z] ===================================================================================================================
00:21:50.347 [2024-12-09T09:32:51.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 517684
00:21:50.347 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:50.605 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:21:50.864 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db
00:21:50.864 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- #
free_clusters=61 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 514572 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 514572 00:21:51.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 514572 Killed "${NVMF_APP[@]}" "$@" 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=519585 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 519585 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 519585 ']' 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.122 10:32:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.122 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:51.122 [2024-12-09 10:32:52.152699] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:21:51.122 [2024-12-09 10:32:52.152749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.122 [2024-12-09 10:32:52.222212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.122 [2024-12-09 10:32:52.263484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.122 [2024-12-09 10:32:52.263519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.122 [2024-12-09 10:32:52.263526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.122 [2024-12-09 10:32:52.263532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.122 [2024-12-09 10:32:52.263537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:51.122 [2024-12-09 10:32:52.264080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.381 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:51.640 [2024-12-09 10:32:52.567636] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:51.640 [2024-12-09 10:32:52.567729] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:51.640 [2024-12-09 10:32:52.567755] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 56b563e5-af30-4d2b-b4e4-6dcbb81438da 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56b563e5-af30-4d2b-b4e4-6dcbb81438da 
00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:51.640 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56b563e5-af30-4d2b-b4e4-6dcbb81438da -t 2000 00:21:51.898 [ 00:21:51.898 { 00:21:51.898 "name": "56b563e5-af30-4d2b-b4e4-6dcbb81438da", 00:21:51.898 "aliases": [ 00:21:51.898 "lvs/lvol" 00:21:51.898 ], 00:21:51.898 "product_name": "Logical Volume", 00:21:51.898 "block_size": 4096, 00:21:51.898 "num_blocks": 38912, 00:21:51.898 "uuid": "56b563e5-af30-4d2b-b4e4-6dcbb81438da", 00:21:51.898 "assigned_rate_limits": { 00:21:51.898 "rw_ios_per_sec": 0, 00:21:51.898 "rw_mbytes_per_sec": 0, 00:21:51.898 "r_mbytes_per_sec": 0, 00:21:51.898 "w_mbytes_per_sec": 0 00:21:51.898 }, 00:21:51.898 "claimed": false, 00:21:51.898 "zoned": false, 00:21:51.898 "supported_io_types": { 00:21:51.898 "read": true, 00:21:51.898 "write": true, 00:21:51.898 "unmap": true, 00:21:51.898 "flush": false, 00:21:51.898 "reset": true, 00:21:51.898 "nvme_admin": false, 00:21:51.898 "nvme_io": false, 00:21:51.898 "nvme_io_md": false, 00:21:51.898 "write_zeroes": true, 00:21:51.898 "zcopy": false, 00:21:51.898 "get_zone_info": false, 00:21:51.898 "zone_management": false, 00:21:51.898 "zone_append": 
false, 00:21:51.898 "compare": false, 00:21:51.898 "compare_and_write": false, 00:21:51.899 "abort": false, 00:21:51.899 "seek_hole": true, 00:21:51.899 "seek_data": true, 00:21:51.899 "copy": false, 00:21:51.899 "nvme_iov_md": false 00:21:51.899 }, 00:21:51.899 "driver_specific": { 00:21:51.899 "lvol": { 00:21:51.899 "lvol_store_uuid": "53e5bb98-ceea-47a2-a381-f0b49e3e58db", 00:21:51.899 "base_bdev": "aio_bdev", 00:21:51.899 "thin_provision": false, 00:21:51.899 "num_allocated_clusters": 38, 00:21:51.899 "snapshot": false, 00:21:51.899 "clone": false, 00:21:51.899 "esnap_clone": false 00:21:51.899 } 00:21:51.899 } 00:21:51.899 } 00:21:51.899 ] 00:21:51.899 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:21:51.899 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:51.899 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:21:52.157 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:21:52.157 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:52.157 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:21:52.157 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:21:52.157 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:21:52.416 [2024-12-09 10:32:53.492384] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:52.416 10:32:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:52.416 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:52.675 request: 00:21:52.675 { 00:21:52.675 "uuid": "53e5bb98-ceea-47a2-a381-f0b49e3e58db", 00:21:52.675 "method": "bdev_lvol_get_lvstores", 00:21:52.675 "req_id": 1 00:21:52.675 } 00:21:52.675 Got JSON-RPC error response 00:21:52.675 response: 00:21:52.675 { 00:21:52.675 "code": -19, 00:21:52.675 "message": "No such device" 00:21:52.675 } 00:21:52.675 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:21:52.675 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.675 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.675 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.675 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:52.934 aio_bdev 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56b563e5-af30-4d2b-b4e4-6dcbb81438da 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56b563e5-af30-4d2b-b4e4-6dcbb81438da 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:52.934 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:52.934 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56b563e5-af30-4d2b-b4e4-6dcbb81438da -t 2000 00:21:53.192 [ 00:21:53.192 { 00:21:53.192 "name": "56b563e5-af30-4d2b-b4e4-6dcbb81438da", 00:21:53.192 "aliases": [ 00:21:53.192 "lvs/lvol" 00:21:53.192 ], 00:21:53.192 "product_name": "Logical Volume", 00:21:53.192 "block_size": 4096, 00:21:53.192 "num_blocks": 38912, 00:21:53.192 "uuid": "56b563e5-af30-4d2b-b4e4-6dcbb81438da", 00:21:53.192 "assigned_rate_limits": { 00:21:53.192 "rw_ios_per_sec": 0, 00:21:53.192 "rw_mbytes_per_sec": 0, 00:21:53.192 "r_mbytes_per_sec": 0, 00:21:53.192 "w_mbytes_per_sec": 0 00:21:53.192 }, 00:21:53.192 "claimed": false, 00:21:53.192 "zoned": false, 00:21:53.192 "supported_io_types": { 00:21:53.192 "read": true, 00:21:53.192 "write": true, 00:21:53.192 "unmap": true, 00:21:53.192 "flush": false, 00:21:53.192 "reset": true, 00:21:53.192 "nvme_admin": false, 00:21:53.192 "nvme_io": false, 00:21:53.192 "nvme_io_md": false, 00:21:53.192 "write_zeroes": true, 00:21:53.192 "zcopy": false, 00:21:53.192 "get_zone_info": false, 00:21:53.192 "zone_management": false, 00:21:53.192 "zone_append": false, 00:21:53.192 "compare": false, 00:21:53.192 "compare_and_write": false, 
00:21:53.192 "abort": false, 00:21:53.192 "seek_hole": true, 00:21:53.192 "seek_data": true, 00:21:53.192 "copy": false, 00:21:53.192 "nvme_iov_md": false 00:21:53.192 }, 00:21:53.192 "driver_specific": { 00:21:53.192 "lvol": { 00:21:53.192 "lvol_store_uuid": "53e5bb98-ceea-47a2-a381-f0b49e3e58db", 00:21:53.192 "base_bdev": "aio_bdev", 00:21:53.192 "thin_provision": false, 00:21:53.192 "num_allocated_clusters": 38, 00:21:53.192 "snapshot": false, 00:21:53.192 "clone": false, 00:21:53.192 "esnap_clone": false 00:21:53.192 } 00:21:53.192 } 00:21:53.192 } 00:21:53.192 ] 00:21:53.192 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:21:53.192 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:53.192 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:53.451 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:53.451 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:53.451 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:53.710 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:53.710 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56b563e5-af30-4d2b-b4e4-6dcbb81438da 00:21:53.710 10:32:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53e5bb98-ceea-47a2-a381-f0b49e3e58db 00:21:53.967 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:54.225 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:54.225 00:21:54.225 real 0m16.743s 00:21:54.225 user 0m43.521s 00:21:54.225 sys 0m3.729s 00:21:54.225 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.225 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:54.225 ************************************ 00:21:54.226 END TEST lvs_grow_dirty 00:21:54.226 ************************************ 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:54.226 nvmf_trace.0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.226 rmmod nvme_tcp 00:21:54.226 rmmod nvme_fabrics 00:21:54.226 rmmod nvme_keyring 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 519585 ']' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 519585 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 519585 ']' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 519585 
00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.226 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519585 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519585' 00:21:54.484 killing process with pid 519585 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 519585 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 519585 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.484 10:32:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.019 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.019 00:21:57.019 real 0m41.162s 00:21:57.019 user 1m4.155s 00:21:57.019 sys 0m9.730s 00:21:57.019 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.019 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:57.019 ************************************ 00:21:57.019 END TEST nvmf_lvs_grow 00:21:57.019 ************************************ 00:21:57.019 10:32:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:57.020 ************************************ 00:21:57.020 START TEST nvmf_bdev_io_wait 00:21:57.020 ************************************ 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:57.020 * Looking for test storage... 
00:21:57.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:57.020 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.020 --rc genhtml_branch_coverage=1 00:21:57.020 --rc genhtml_function_coverage=1 00:21:57.020 --rc genhtml_legend=1 00:21:57.020 --rc geninfo_all_blocks=1 00:21:57.020 --rc geninfo_unexecuted_blocks=1 00:21:57.020 00:21:57.020 ' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:57.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.020 --rc genhtml_branch_coverage=1 00:21:57.020 --rc genhtml_function_coverage=1 00:21:57.020 --rc genhtml_legend=1 00:21:57.020 --rc geninfo_all_blocks=1 00:21:57.020 --rc geninfo_unexecuted_blocks=1 00:21:57.020 00:21:57.020 ' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:57.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.020 --rc genhtml_branch_coverage=1 00:21:57.020 --rc genhtml_function_coverage=1 00:21:57.020 --rc genhtml_legend=1 00:21:57.020 --rc geninfo_all_blocks=1 00:21:57.020 --rc geninfo_unexecuted_blocks=1 00:21:57.020 00:21:57.020 ' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:57.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.020 --rc genhtml_branch_coverage=1 00:21:57.020 --rc genhtml_function_coverage=1 00:21:57.020 --rc genhtml_legend=1 00:21:57.020 --rc geninfo_all_blocks=1 00:21:57.020 --rc geninfo_unexecuted_blocks=1 00:21:57.020 00:21:57.020 ' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.020 10:32:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.020 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.021 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.299 10:33:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.299 10:33:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.299 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.299 
10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.299 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.299 10:33:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.299 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:22:02.300 00:22:02.300 --- 10.0.0.2 ping statistics --- 00:22:02.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.300 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:02.300 00:22:02.300 --- 10.0.0.1 ping statistics --- 00:22:02.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.300 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=523894 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 523894 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 523894 ']' 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.300 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.300 [2024-12-09 10:33:03.440606] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:02.300 [2024-12-09 10:33:03.440651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.559 [2024-12-09 10:33:03.511750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.559 [2024-12-09 10:33:03.554318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.559 [2024-12-09 10:33:03.554357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:02.559 [2024-12-09 10:33:03.554364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.559 [2024-12-09 10:33:03.554370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.559 [2024-12-09 10:33:03.554376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.559 [2024-12-09 10:33:03.555971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.559 [2024-12-09 10:33:03.556071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.559 [2024-12-09 10:33:03.556176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.559 [2024-12-09 10:33:03.556179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 10:33:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 [2024-12-09 10:33:03.704605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 Malloc0 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.559 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.559 
10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:02.818 [2024-12-09 10:33:03.760234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=523967 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=523969 
00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.818 { 00:22:02.818 "params": { 00:22:02.818 "name": "Nvme$subsystem", 00:22:02.818 "trtype": "$TEST_TRANSPORT", 00:22:02.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.818 "adrfam": "ipv4", 00:22:02.818 "trsvcid": "$NVMF_PORT", 00:22:02.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.818 "hdgst": ${hdgst:-false}, 00:22:02.818 "ddgst": ${ddgst:-false} 00:22:02.818 }, 00:22:02.818 "method": "bdev_nvme_attach_controller" 00:22:02.818 } 00:22:02.818 EOF 00:22:02.818 )") 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=523971 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.818 { 00:22:02.818 "params": { 00:22:02.818 "name": "Nvme$subsystem", 00:22:02.818 "trtype": "$TEST_TRANSPORT", 00:22:02.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.818 "adrfam": "ipv4", 00:22:02.818 "trsvcid": "$NVMF_PORT", 00:22:02.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.818 "hdgst": ${hdgst:-false}, 00:22:02.818 "ddgst": ${ddgst:-false} 00:22:02.818 }, 00:22:02.818 "method": "bdev_nvme_attach_controller" 00:22:02.818 } 00:22:02.818 EOF 00:22:02.818 )") 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=523974 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.818 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.818 { 00:22:02.818 "params": { 00:22:02.818 "name": "Nvme$subsystem", 00:22:02.819 "trtype": "$TEST_TRANSPORT", 00:22:02.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "$NVMF_PORT", 00:22:02.819 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.819 "hdgst": ${hdgst:-false}, 00:22:02.819 "ddgst": ${ddgst:-false} 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 } 00:22:02.819 EOF 00:22:02.819 )") 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.819 { 00:22:02.819 "params": { 00:22:02.819 "name": "Nvme$subsystem", 00:22:02.819 "trtype": "$TEST_TRANSPORT", 00:22:02.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "$NVMF_PORT", 00:22:02.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.819 "hdgst": ${hdgst:-false}, 00:22:02.819 "ddgst": ${ddgst:-false} 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 } 00:22:02.819 EOF 00:22:02.819 )") 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 523967 00:22:02.819 10:33:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.819 "params": { 00:22:02.819 "name": "Nvme1", 00:22:02.819 "trtype": "tcp", 00:22:02.819 "traddr": "10.0.0.2", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "4420", 00:22:02.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.819 "hdgst": false, 00:22:02.819 "ddgst": false 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 }' 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.819 "params": { 00:22:02.819 "name": "Nvme1", 00:22:02.819 "trtype": "tcp", 00:22:02.819 "traddr": "10.0.0.2", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "4420", 00:22:02.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.819 "hdgst": false, 00:22:02.819 "ddgst": false 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 }' 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.819 "params": { 00:22:02.819 "name": "Nvme1", 00:22:02.819 "trtype": "tcp", 00:22:02.819 "traddr": "10.0.0.2", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "4420", 00:22:02.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.819 "hdgst": false, 00:22:02.819 "ddgst": false 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 }' 00:22:02.819 10:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.819 "params": { 00:22:02.819 "name": "Nvme1", 00:22:02.819 "trtype": "tcp", 00:22:02.819 "traddr": "10.0.0.2", 00:22:02.819 "adrfam": "ipv4", 00:22:02.819 "trsvcid": "4420", 00:22:02.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.819 "hdgst": false, 00:22:02.819 "ddgst": false 00:22:02.819 }, 00:22:02.819 "method": "bdev_nvme_attach_controller" 00:22:02.819 }' 00:22:02.819 [2024-12-09 10:33:03.812066] Starting SPDK v25.01-pre git sha1 
b920049a1 / DPDK 24.03.0 initialization... 00:22:02.819 [2024-12-09 10:33:03.812117] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:02.819 [2024-12-09 10:33:03.812857] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:02.819 [2024-12-09 10:33:03.812903] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:02.819 [2024-12-09 10:33:03.815423] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:02.819 [2024-12-09 10:33:03.815466] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:02.819 [2024-12-09 10:33:03.816890] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:22:02.819 [2024-12-09 10:33:03.816933] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:03.078 [2024-12-09 10:33:04.005045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.078 [2024-12-09 10:33:04.048025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.078 [2024-12-09 10:33:04.101398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.078 [2024-12-09 10:33:04.153427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.078 [2024-12-09 10:33:04.161735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:03.078 [2024-12-09 10:33:04.196391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:03.078 [2024-12-09 10:33:04.213630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.337 [2024-12-09 10:33:04.256435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:03.337 Running I/O for 1 seconds... 00:22:03.337 Running I/O for 1 seconds... 00:22:03.337 Running I/O for 1 seconds... 00:22:03.337 Running I/O for 1 seconds... 
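The four bdevperf instances launched above differ only in core mask, instance id, and workload (write, read, flush, unmap); the shared `-q 128 -o 4096 -t 1 -s 256` options and the masks are taken from this run. A dry-run sketch of the launch pattern (the binary path is the workspace-relative one from this log; commands are echoed, not executed):

```shell
# Dry-run sketch of the four concurrent bdevperf launches from the log.
# Each instance gets its own core mask, shm instance id, and workload;
# in the real test each reads its JSON config from a process-substitution fd.
BDEVPERF="build/examples/bdevperf"   # assumed relative path
masks=(0x10 0x20 0x40 0x80)
workloads=(write read flush unmap)
for i in 0 1 2 3; do
  echo "$BDEVPERF -m ${masks[$i]} -i $((i + 1)) -q 128 -o 4096" \
       "-w ${workloads[$i]} -t 1 -s 256 --json /dev/fd/63 &"
done
```

The log's `wait 523967` (and the later waits on the read/flush/unmap PIDs) then blocks until all four one-second runs finish before tearing the subsystem down.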
00:22:04.271 237216.00 IOPS, 926.62 MiB/s 00:22:04.271 Latency(us) 00:22:04.271 [2024-12-09T09:33:05.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.271 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:04.271 Nvme1n1 : 1.00 236851.91 925.20 0.00 0.00 537.53 227.06 1531.55 00:22:04.271 [2024-12-09T09:33:05.447Z] =================================================================================================================== 00:22:04.271 [2024-12-09T09:33:05.447Z] Total : 236851.91 925.20 0.00 0.00 537.53 227.06 1531.55 00:22:04.271 7624.00 IOPS, 29.78 MiB/s 00:22:04.271 Latency(us) 00:22:04.271 [2024-12-09T09:33:05.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.271 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:04.271 Nvme1n1 : 1.02 7598.04 29.68 0.00 0.00 16656.37 6724.56 24960.67 00:22:04.271 [2024-12-09T09:33:05.447Z] =================================================================================================================== 00:22:04.271 [2024-12-09T09:33:05.447Z] Total : 7598.04 29.68 0.00 0.00 16656.37 6724.56 24960.67 00:22:04.271 11320.00 IOPS, 44.22 MiB/s 00:22:04.271 Latency(us) 00:22:04.271 [2024-12-09T09:33:05.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.271 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:04.271 Nvme1n1 : 1.01 11364.73 44.39 0.00 0.00 11218.24 6297.15 22225.25 00:22:04.271 [2024-12-09T09:33:05.447Z] =================================================================================================================== 00:22:04.271 [2024-12-09T09:33:05.447Z] Total : 11364.73 44.39 0.00 0.00 11218.24 6297.15 22225.25 00:22:04.271 7774.00 IOPS, 30.37 MiB/s 00:22:04.271 Latency(us) 00:22:04.271 [2024-12-09T09:33:05.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.271 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:04.271 Nvme1n1 : 1.00 7886.03 30.80 0.00 0.00 16200.70 2478.97 39891.48 00:22:04.271 [2024-12-09T09:33:05.447Z] =================================================================================================================== 00:22:04.271 [2024-12-09T09:33:05.447Z] Total : 7886.03 30.80 0.00 0.00 16200.70 2478.97 39891.48 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 523969 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 523971 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 523974 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.530 
10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.530 rmmod nvme_tcp 00:22:04.530 rmmod nvme_fabrics 00:22:04.530 rmmod nvme_keyring 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 523894 ']' 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 523894 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 523894 ']' 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 523894 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.530 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523894 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523894' 00:22:04.788 killing process with pid 523894 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 523894 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 523894 00:22:04.788 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.789 10:33:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.321 10:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.321 00:22:07.321 real 0m10.215s 00:22:07.321 user 0m15.903s 00:22:07.321 sys 0m5.650s 00:22:07.321 10:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.321 10:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:07.321 ************************************ 00:22:07.321 END TEST nvmf_bdev_io_wait 00:22:07.321 
************************************ 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:07.321 ************************************ 00:22:07.321 START TEST nvmf_queue_depth 00:22:07.321 ************************************ 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:07.321 * Looking for test storage... 00:22:07.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.321 10:33:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.321 --rc genhtml_branch_coverage=1 00:22:07.321 --rc genhtml_function_coverage=1 00:22:07.321 --rc genhtml_legend=1 00:22:07.321 --rc geninfo_all_blocks=1 00:22:07.321 --rc 
geninfo_unexecuted_blocks=1 00:22:07.321 00:22:07.321 ' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.321 --rc genhtml_branch_coverage=1 00:22:07.321 --rc genhtml_function_coverage=1 00:22:07.321 --rc genhtml_legend=1 00:22:07.321 --rc geninfo_all_blocks=1 00:22:07.321 --rc geninfo_unexecuted_blocks=1 00:22:07.321 00:22:07.321 ' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.321 --rc genhtml_branch_coverage=1 00:22:07.321 --rc genhtml_function_coverage=1 00:22:07.321 --rc genhtml_legend=1 00:22:07.321 --rc geninfo_all_blocks=1 00:22:07.321 --rc geninfo_unexecuted_blocks=1 00:22:07.321 00:22:07.321 ' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.321 --rc genhtml_branch_coverage=1 00:22:07.321 --rc genhtml_function_coverage=1 00:22:07.321 --rc genhtml_legend=1 00:22:07.321 --rc geninfo_all_blocks=1 00:22:07.321 --rc geninfo_unexecuted_blocks=1 00:22:07.321 00:22:07.321 ' 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.321 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.322 10:33:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.322 10:33:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.322 10:33:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.322 10:33:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.911 10:33:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:13.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:13.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:13.911 Found net devices under 0000:86:00.0: cvl_0_0 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:13.911 Found net devices under 0000:86:00.1: cvl_0_1 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.911 
10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.911 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.912 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:22:13.912 00:22:13.912 --- 10.0.0.2 ping statistics --- 00:22:13.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.912 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:22:13.912 00:22:13.912 --- 10.0.0.1 ping statistics --- 00:22:13.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.912 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=528160 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 528160 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 528160 ']' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 [2024-12-09 10:33:14.126275] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:13.912 [2024-12-09 10:33:14.126324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.912 [2024-12-09 10:33:14.198172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.912 [2024-12-09 10:33:14.241453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.912 [2024-12-09 10:33:14.241486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:13.912 [2024-12-09 10:33:14.241494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.912 [2024-12-09 10:33:14.241500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.912 [2024-12-09 10:33:14.241506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.912 [2024-12-09 10:33:14.242075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 [2024-12-09 10:33:14.379128] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 Malloc0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 [2024-12-09 10:33:14.425528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.912 10:33:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=528297 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 528297 /var/tmp/bdevperf.sock 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 528297 ']' 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 [2024-12-09 10:33:14.475949] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:22:13.912 [2024-12-09 10:33:14.475990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528297 ] 00:22:13.912 [2024-12-09 10:33:14.541343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.912 [2024-12-09 10:33:14.584903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.912 NVMe0n1 00:22:13.912 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.913 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.913 Running I/O for 10 seconds... 
00:22:15.783 11265.00 IOPS, 44.00 MiB/s [2024-12-09T09:33:17.898Z] 11745.50 IOPS, 45.88 MiB/s [2024-12-09T09:33:19.278Z] 11770.00 IOPS, 45.98 MiB/s [2024-12-09T09:33:20.217Z] 11777.00 IOPS, 46.00 MiB/s [2024-12-09T09:33:21.155Z] 11877.00 IOPS, 46.39 MiB/s [2024-12-09T09:33:22.092Z] 11927.00 IOPS, 46.59 MiB/s [2024-12-09T09:33:23.114Z] 11983.57 IOPS, 46.81 MiB/s [2024-12-09T09:33:24.046Z] 12021.50 IOPS, 46.96 MiB/s [2024-12-09T09:33:24.981Z] 12041.22 IOPS, 47.04 MiB/s [2024-12-09T09:33:24.981Z] 12064.60 IOPS, 47.13 MiB/s 00:22:23.805 Latency(us) 00:22:23.805 [2024-12-09T09:33:24.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.805 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:23.805 Verification LBA range: start 0x0 length 0x4000 00:22:23.805 NVMe0n1 : 10.05 12105.57 47.29 0.00 0.00 84315.28 9744.92 58127.58 00:22:23.805 [2024-12-09T09:33:24.981Z] =================================================================================================================== 00:22:23.805 [2024-12-09T09:33:24.981Z] Total : 12105.57 47.29 0.00 0.00 84315.28 9744.92 58127.58 00:22:23.805 { 00:22:23.805 "results": [ 00:22:23.805 { 00:22:23.805 "job": "NVMe0n1", 00:22:23.805 "core_mask": "0x1", 00:22:23.805 "workload": "verify", 00:22:23.805 "status": "finished", 00:22:23.805 "verify_range": { 00:22:23.805 "start": 0, 00:22:23.805 "length": 16384 00:22:23.805 }, 00:22:23.805 "queue_depth": 1024, 00:22:23.805 "io_size": 4096, 00:22:23.805 "runtime": 10.050745, 00:22:23.805 "iops": 12105.570283595893, 00:22:23.805 "mibps": 47.287383920296456, 00:22:23.805 "io_failed": 0, 00:22:23.805 "io_timeout": 0, 00:22:23.805 "avg_latency_us": 84315.2830604236, 00:22:23.805 "min_latency_us": 9744.918260869565, 00:22:23.805 "max_latency_us": 58127.58260869565 00:22:23.805 } 00:22:23.805 ], 00:22:23.805 "core_count": 1 00:22:23.805 } 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 528297 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 528297 ']' 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 528297 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.063 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528297 00:22:24.063 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.063 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.063 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528297' 00:22:24.063 killing process with pid 528297 00:22:24.063 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 528297 00:22:24.063 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.063 00:22:24.063 Latency(us) 00:22:24.063 [2024-12-09T09:33:25.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.063 [2024-12-09T09:33:25.239Z] =================================================================================================================== 00:22:24.063 [2024-12-09T09:33:25.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.063 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 528297 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.321 rmmod nvme_tcp 00:22:24.321 rmmod nvme_fabrics 00:22:24.321 rmmod nvme_keyring 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 528160 ']' 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 528160 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 528160 ']' 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 528160 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528160 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528160' 00:22:24.321 killing process with pid 528160 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 528160 00:22:24.321 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 528160 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.578 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.485 10:33:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.485 00:22:26.485 real 0m19.595s 00:22:26.485 user 0m23.098s 00:22:26.485 sys 0m5.930s 00:22:26.485 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:26.745 ************************************ 00:22:26.745 END TEST nvmf_queue_depth 00:22:26.745 ************************************ 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:26.745 ************************************ 00:22:26.745 START TEST nvmf_target_multipath 00:22:26.745 ************************************ 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:26.745 * Looking for test storage... 
00:22:26.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:26.745 10:33:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.745 --rc genhtml_branch_coverage=1 00:22:26.745 --rc genhtml_function_coverage=1 00:22:26.745 --rc genhtml_legend=1 00:22:26.745 --rc geninfo_all_blocks=1 00:22:26.745 --rc geninfo_unexecuted_blocks=1 00:22:26.745 00:22:26.745 ' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.745 --rc genhtml_branch_coverage=1 00:22:26.745 --rc genhtml_function_coverage=1 00:22:26.745 --rc genhtml_legend=1 00:22:26.745 --rc geninfo_all_blocks=1 00:22:26.745 --rc geninfo_unexecuted_blocks=1 00:22:26.745 00:22:26.745 ' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.745 --rc genhtml_branch_coverage=1 00:22:26.745 --rc genhtml_function_coverage=1 00:22:26.745 --rc genhtml_legend=1 00:22:26.745 --rc geninfo_all_blocks=1 00:22:26.745 --rc geninfo_unexecuted_blocks=1 00:22:26.745 00:22:26.745 ' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.745 --rc genhtml_branch_coverage=1 00:22:26.745 --rc genhtml_function_coverage=1 00:22:26.745 --rc genhtml_legend=1 00:22:26.745 --rc geninfo_all_blocks=1 00:22:26.745 --rc geninfo_unexecuted_blocks=1 00:22:26.745 00:22:26.745 ' 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.745 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.746 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.746 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.746 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.005 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.006 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.275 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.275 10:33:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.275 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.275 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.276 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:22:32.534 00:22:32.534 --- 10.0.0.2 ping statistics --- 00:22:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.534 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:32.534 00:22:32.534 --- 10.0.0.1 ping statistics --- 00:22:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.534 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:32.534 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:32.535 only one NIC for nvmf test 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:22:32.535 10:33:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.535 rmmod nvme_tcp 00:22:32.535 rmmod nvme_fabrics 00:22:32.535 rmmod nvme_keyring 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.535 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:22:35.069 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.070 00:22:35.070 real 0m8.043s 00:22:35.070 user 0m1.800s 00:22:35.070 sys 0m4.268s 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:35.070 ************************************ 00:22:35.070 END TEST nvmf_target_multipath 00:22:35.070 ************************************ 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:35.070 ************************************ 00:22:35.070 START TEST nvmf_zcopy 00:22:35.070 ************************************ 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:35.070 * Looking for test storage... 00:22:35.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:22:35.070 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.070 10:33:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.070 --rc genhtml_branch_coverage=1 00:22:35.070 --rc genhtml_function_coverage=1 00:22:35.070 --rc genhtml_legend=1 00:22:35.070 --rc geninfo_all_blocks=1 00:22:35.070 --rc geninfo_unexecuted_blocks=1 00:22:35.070 00:22:35.070 ' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.070 --rc genhtml_branch_coverage=1 00:22:35.070 --rc genhtml_function_coverage=1 00:22:35.070 --rc genhtml_legend=1 00:22:35.070 --rc geninfo_all_blocks=1 00:22:35.070 --rc geninfo_unexecuted_blocks=1 00:22:35.070 00:22:35.070 ' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.070 --rc genhtml_branch_coverage=1 00:22:35.070 --rc genhtml_function_coverage=1 00:22:35.070 --rc genhtml_legend=1 00:22:35.070 --rc geninfo_all_blocks=1 00:22:35.070 --rc geninfo_unexecuted_blocks=1 00:22:35.070 00:22:35.070 ' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.070 --rc genhtml_branch_coverage=1 00:22:35.070 --rc 
genhtml_function_coverage=1 00:22:35.070 --rc genhtml_legend=1 00:22:35.070 --rc geninfo_all_blocks=1 00:22:35.070 --rc geninfo_unexecuted_blocks=1 00:22:35.070 00:22:35.070 ' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.070 10:33:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.070 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.071 10:33:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.071 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.343 10:33:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.343 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.344 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.344 10:33:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.344 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.344 10:33:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:22:40.344 00:22:40.344 --- 10.0.0.2 ping statistics --- 00:22:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.344 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:22:40.344 00:22:40.344 --- 10.0.0.1 ping statistics --- 00:22:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.344 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=537080 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
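The namespace wiring that nvmf/common.sh performs above (roughly steps @265 through @291) can be collected into one readable sketch. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are specific to this CI host; applying the commands requires root, so this sketch only prints the command list it would run.

```shell
# Dry-run sketch of the target/initiator netns split from nvmf/common.sh.
# cvl_0_0 moves into a namespace (the "target"); cvl_0_1 stays in the root
# namespace (the "initiator"); an iptables rule opens NVMe/TCP port 4420.
setup_cmds() {
  local tgt_if=$1 ini_if=$2 ns=$3
  cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The log then verifies the wiring with a `ping` in each direction before the target application is started inside the namespace via `ip netns exec`.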
0x2 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 537080 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 537080 ']' 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.344 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.603 [2024-12-09 10:33:41.554811] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:40.603 [2024-12-09 10:33:41.554853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.603 [2024-12-09 10:33:41.624213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.603 [2024-12-09 10:33:41.664569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.603 [2024-12-09 10:33:41.664608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.603 [2024-12-09 10:33:41.664619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.603 [2024-12-09 10:33:41.664625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.603 [2024-12-09 10:33:41.664630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.603 [2024-12-09 10:33:41.665209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.603 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.603 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:22:40.603 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.603 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.603 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 [2024-12-09 10:33:41.801718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 [2024-12-09 10:33:41.817920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 malloc0 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.862 { 00:22:40.862 "params": { 00:22:40.862 "name": "Nvme$subsystem", 00:22:40.862 "trtype": "$TEST_TRANSPORT", 00:22:40.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.862 "adrfam": "ipv4", 00:22:40.862 "trsvcid": "$NVMF_PORT", 00:22:40.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.862 "hdgst": ${hdgst:-false}, 00:22:40.862 "ddgst": ${ddgst:-false} 00:22:40.862 }, 00:22:40.862 "method": "bdev_nvme_attach_controller" 00:22:40.862 } 00:22:40.862 EOF 00:22:40.862 )") 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
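The target-side provisioning that zcopy.sh issues above (steps @22 through @30) is easier to follow gathered in one place. The commands are taken verbatim from the log; they are shown here as a dry run, so swap `echo` for the real `scripts/rpc.py` once an SPDK target is listening on /var/tmp/spdk.sock.

```shell
# Stand-in for scripts/rpc.py so the sequence can be printed without a
# running target; replace the body with the real rpc.py invocation to apply.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

Note the `--zcopy` flag on the transport and `-c 0` (zero in-capsule data size), which together force the zero-copy path this test exercises.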
00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:22:40.862 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.862 "params": { 00:22:40.862 "name": "Nvme1", 00:22:40.862 "trtype": "tcp", 00:22:40.862 "traddr": "10.0.0.2", 00:22:40.862 "adrfam": "ipv4", 00:22:40.862 "trsvcid": "4420", 00:22:40.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.862 "hdgst": false, 00:22:40.862 "ddgst": false 00:22:40.862 }, 00:22:40.862 "method": "bdev_nvme_attach_controller" 00:22:40.862 }' 00:22:40.862 [2024-12-09 10:33:41.896194] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:22:40.862 [2024-12-09 10:33:41.896236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid537103 ] 00:22:40.862 [2024-12-09 10:33:41.961649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.862 [2024-12-09 10:33:42.002678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.121 Running I/O for 10 seconds... 
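The JSON fed to bdevperf on /dev/fd/62 above is produced by `gen_nvmf_target_json` (nvmf/common.sh @560 through @586). A rough re-creation is sketched below: one `bdev_nvme_attach_controller` stanza per subsystem id, comma-joined. The hard-coded address and port mirror the expanded values printed in the log (`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` in the real helper), and the final `jq .` normalization step from the helper is skipped here.

```shell
# Approximate re-creation of gen_nvmf_target_json: emit one attach-controller
# params block per subsystem id passed in (default: 1), joined with commas.
gen_target_json() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

Passing the result on a file descriptor (`--json /dev/fd/62`) lets bdevperf attach to the target without writing a config file to disk.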
00:22:43.433 8495.00 IOPS, 66.37 MiB/s [2024-12-09T09:33:45.546Z] 8582.50 IOPS, 67.05 MiB/s [2024-12-09T09:33:46.483Z] 8592.00 IOPS, 67.12 MiB/s [2024-12-09T09:33:47.419Z] 8589.00 IOPS, 67.10 MiB/s [2024-12-09T09:33:48.357Z] 8591.80 IOPS, 67.12 MiB/s [2024-12-09T09:33:49.733Z] 8593.00 IOPS, 67.13 MiB/s [2024-12-09T09:33:50.699Z] 8596.00 IOPS, 67.16 MiB/s [2024-12-09T09:33:51.633Z] 8590.12 IOPS, 67.11 MiB/s [2024-12-09T09:33:52.565Z] 8586.33 IOPS, 67.08 MiB/s [2024-12-09T09:33:52.565Z] 8592.10 IOPS, 67.13 MiB/s 00:22:51.389 Latency(us) 00:22:51.389 [2024-12-09T09:33:52.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:51.389 Verification LBA range: start 0x0 length 0x1000 00:22:51.389 Nvme1n1 : 10.01 8594.30 67.14 0.00 0.00 14850.88 648.24 22453.20 00:22:51.389 [2024-12-09T09:33:52.565Z] =================================================================================================================== 00:22:51.389 [2024-12-09T09:33:52.565Z] Total : 8594.30 67.14 0.00 0.00 14850.88 648.24 22453.20 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=538934 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:22:51.389 10:33:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.389 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.389 { 00:22:51.389 "params": { 00:22:51.389 "name": "Nvme$subsystem", 00:22:51.389 "trtype": "$TEST_TRANSPORT", 00:22:51.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.389 "adrfam": "ipv4", 00:22:51.389 "trsvcid": "$NVMF_PORT", 00:22:51.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.389 "hdgst": ${hdgst:-false}, 00:22:51.389 "ddgst": ${ddgst:-false} 00:22:51.389 }, 00:22:51.389 "method": "bdev_nvme_attach_controller" 00:22:51.389 } 00:22:51.389 EOF 00:22:51.389 )") 00:22:51.390 [2024-12-09 10:33:52.523662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.523697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.390 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:22:51.390 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:22:51.390 [2024-12-09 10:33:52.531645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.531659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.390 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:22:51.390 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:51.390 "params": { 00:22:51.390 "name": "Nvme1", 00:22:51.390 "trtype": "tcp", 00:22:51.390 "traddr": "10.0.0.2", 00:22:51.390 "adrfam": "ipv4", 00:22:51.390 "trsvcid": "4420", 00:22:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.390 "hdgst": false, 00:22:51.390 "ddgst": false 00:22:51.390 }, 00:22:51.390 "method": "bdev_nvme_attach_controller" 00:22:51.390 }' 00:22:51.390 [2024-12-09 10:33:52.539661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.539673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.390 [2024-12-09 10:33:52.547682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.547692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.390 [2024-12-09 10:33:52.555702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.555713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.390 [2024-12-09 10:33:52.563291] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:22:51.390 [2024-12-09 10:33:52.563333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538934 ] 00:22:51.390 [2024-12-09 10:33:52.563725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.390 [2024-12-09 10:33:52.563735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.571747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.571757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.579768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.579778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.587793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.587807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.595812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.595823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.603834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.603844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.611855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.611869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:22:51.649 [2024-12-09 10:33:52.619876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.619885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.627110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.649 [2024-12-09 10:33:52.627896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.627906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.635918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.635932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.643938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.643953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.651959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.651969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.659983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.659994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.668009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.668040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.669091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.649 [2024-12-09 10:33:52.676041] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.676052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.684076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.684095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.692076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.692093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.700096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.700111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.708115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.708128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.716134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.716147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.724155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.724169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.732178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.732192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.740200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.740215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.748230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.748240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.756256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.756280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.649 [2024-12-09 10:33:52.764275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.649 [2024-12-09 10:33:52.764292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.772293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.772307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.780313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.780329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.788333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.788345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.796350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.796361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.804372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 
[2024-12-09 10:33:52.804382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.812395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.812405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.650 [2024-12-09 10:33:52.820420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.650 [2024-12-09 10:33:52.820430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.828445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.828459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.836468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.836481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.844486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.844497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.852508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.852517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.860530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.860540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.868551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.868560] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.876573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.876584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.884598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.884610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.892618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.892628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.900641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.900652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.908663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.908677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.916686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.916696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.924713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.924727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.932730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.932739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:51.909 [2024-12-09 10:33:52.940752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.940762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.948774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.948784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.956796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.956807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.964818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.964829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.972839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.972849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 [2024-12-09 10:33:52.980869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:51.909 [2024-12-09 10:33:52.980887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:51.909 Running I/O for 5 seconds... 
00:22:51.909 [2024-12-09 10:33:52.988886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:51.909 [2024-12-09 10:33:52.988897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats continuously between 10:33:53.001 and 10:33:53.986 ...]
00:22:52.949 16400.00 IOPS, 128.12 MiB/s [2024-12-09T09:33:54.125Z]
[... the same pair of errors continues between 10:33:53.994 and 10:33:54.538; log truncated mid-entry ...]
add namespace 00:22:53.469 [2024-12-09 10:33:54.547493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.547512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.556842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.556861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.566283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.566303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.575516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.575535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.584282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.584300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.593460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.593480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.602854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.602873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.611933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.611952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.621112] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.621130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.630173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.630192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.469 [2024-12-09 10:33:54.640036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.469 [2024-12-09 10:33:54.640056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.648903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.648922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.658266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.658285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.668136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.668155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.676946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.676966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.686059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.686078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.694812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.694831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.703943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.703963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.712576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.712595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.721290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.721310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.728386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.728405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.739447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.739469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.748258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.748278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.756968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.729 [2024-12-09 10:33:54.756987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.729 [2024-12-09 10:33:54.765734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 
[2024-12-09 10:33:54.765752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.772800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.772818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.784163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.784183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.793652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.793673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.802216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.802234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.811344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.811363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.820829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.820849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.830184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.830203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.839293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.839312] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.848762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.848781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.858118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.858138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.868148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.868168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.877814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.877833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.886555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.886574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.895193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.895212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.730 [2024-12-09 10:33:54.903952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.730 [2024-12-09 10:33:54.903972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.912760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.912780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:53.990 [2024-12-09 10:33:54.921241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.921259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.930027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.930047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.939248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.939267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.948441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.948459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.957690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.957712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.967553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.967571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.976160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.976178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:54.985361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.985379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 16486.50 IOPS, 128.80 MiB/s 
[2024-12-09T09:33:55.166Z] [2024-12-09 10:33:54.994559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:54.994577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.004323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.004341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.012995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.013020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.021680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.021699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.030667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.030685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.039824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.039842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.049022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.049057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.058561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.058580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.067855] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.067873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.076718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.076737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.085412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.085430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.094548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.094566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.103155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.103173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.112439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.112458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.121893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.121917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.131250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.131269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.139972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.139990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.149427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.149452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.990 [2024-12-09 10:33:55.158098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.990 [2024-12-09 10:33:55.158118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.167474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.167493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.176859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.176878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.186187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.186206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.195554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.195572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.204762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.204781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.211670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 
[2024-12-09 10:33:55.211688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.222175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.222194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.231408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.231427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.240137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.240156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.249178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.249198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.257727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.257746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.267014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.267034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.276463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.276482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.285730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.285750] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.294296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.294318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.303031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.303056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.311783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.311801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.321240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.321258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.329831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.329849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.339141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.339161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.348636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.348655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.357717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.357736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:54.249 [2024-12-09 10:33:55.366873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.366893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.375465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.375483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.384957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.384977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.393631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.393650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.402412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.402431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.411561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.411580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.249 [2024-12-09 10:33:55.420788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.249 [2024-12-09 10:33:55.420807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.429994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.430019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.439229] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.439248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.446142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.446160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.457144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.457163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.465915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.465938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.474565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.474583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.483366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.483385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.492731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.492751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.502120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.502138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.511317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.511336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.520018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.520036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.528701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.528719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.538158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.538177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.546851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.546870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.555483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.555501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.564201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.564220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.573592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.573611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.583112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 
[2024-12-09 10:33:55.583131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.591865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.591884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.600514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.600533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.609335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.609355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.618122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.618141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.627308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.627328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.636756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.636775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.646589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.646608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.655357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.655376] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.665257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.665276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.508 [2024-12-09 10:33:55.674613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.508 [2024-12-09 10:33:55.674631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.683338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.683357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.692064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.692083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.701454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.701474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.711413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.711432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.720221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.720240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.729579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.729598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:54.768 [2024-12-09 10:33:55.738228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.738247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.747055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.747074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.756271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.756290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.765673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.765694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.774978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.775003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.783632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.783651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.792226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.792245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.801666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.801684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.811468] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.811487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.820339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.820359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.829864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.829883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.839226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.839244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.847931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.847951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.856451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.856470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.865263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.865283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.875073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.875093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.883877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.883897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.892514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.892535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.901145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.901164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.909908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.909928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.918466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.918486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.927161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.927181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.768 [2024-12-09 10:33:55.937109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.768 [2024-12-09 10:33:55.937129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.946091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:55.946110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.955515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 
[2024-12-09 10:33:55.955534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.964362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:55.964381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.973637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:55.973657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.983090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:55.983110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:55.992439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:55.992458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 16491.00 IOPS, 128.84 MiB/s [2024-12-09T09:33:56.204Z] [2024-12-09 10:33:56.001088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.001108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.010322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.010343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.019742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.019762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.029132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 
[2024-12-09 10:33:56.029151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.038646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.038666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.048036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.048056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.057349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.057368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.066057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.066078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.075329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.075348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.084742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.084763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.093959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.093979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.103306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.103325] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.111831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.111851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.121156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.121175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.130380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.130399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.139711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.139730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.149026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.149050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.157685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.157705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.166863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.166882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.175565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.175584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:55.028 [2024-12-09 10:33:56.184755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.184774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.194304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.194324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.028 [2024-12-09 10:33:56.203027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.028 [2024-12-09 10:33:56.203047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.211809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.211829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.221209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.221229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.230507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.230527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.239057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.239076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.248463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.248483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.257072] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.257091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.266429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.266448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.275898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.275917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.285368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.285387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.294671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.294690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.303691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.303709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.312918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.312937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.321490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.321513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.328771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.328790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.339466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.339485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.348200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.348219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.357453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.357471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.366227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.366246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.375468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.375486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.384130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.384149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.393230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.393248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.402727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 
[2024-12-09 10:33:56.402745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.411357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.411376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.420555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.420574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.429343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.429362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.439010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.439030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.447569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.447588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.288 [2024-12-09 10:33:56.456735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.288 [2024-12-09 10:33:56.456754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.466040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.466059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.474875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.474894] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.483638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.483657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.492981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.493013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.502114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.502132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.511431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.548 [2024-12-09 10:33:56.511449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.548 [2024-12-09 10:33:56.520980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.521005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.530384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.530402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.539766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.539785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.548974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.548993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:22:55.549 [2024-12-09 10:33:56.558332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.558350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.565477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.565495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.575724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.575743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.585183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.585202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.593895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.593913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.603216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.603234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.612365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.612384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.621711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.621731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.630627] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.630646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.640070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.640088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.649422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.649441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.658776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.658795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.668045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.668068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.677734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.677753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.686487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.686505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.695280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.695298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.704415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.704434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.713702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.713721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.549 [2024-12-09 10:33:56.722339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.549 [2024-12-09 10:33:56.722357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.731709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.731727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.741886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.741904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.750804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.750822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.759480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.759499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.768301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.768320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.777383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 
[2024-12-09 10:33:56.777402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.785933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.785953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.794768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.794787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.803217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.803236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.811756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.811774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.818783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.818801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.830143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.830162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.838937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.838956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.847563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.847581] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:55.809 [2024-12-09 10:33:56.856282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:55.809 [2024-12-09 10:33:56.856301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:56.070 16526.25 IOPS, 129.11 MiB/s [2024-12-09T09:33:57.246Z] [2024-12-09 10:33:57.969132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:56.853
[2024-12-09 10:33:57.969151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:56.853 16544.20 IOPS, 129.25 MiB/s [2024-12-09T09:33:58.029Z]
00:22:56.853 00:22:56.853 Latency(us) 00:22:56.853 [2024-12-09T09:33:58.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.853 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:22:56.853 Nvme1n1 : 5.01 16546.13 129.27 0.00 0.00 7728.16 3276.80 18122.13 00:22:56.853 [2024-12-09T09:33:58.029Z] =================================================================================================================== 00:22:56.853 [2024-12-09T09:33:58.029Z] Total : 16546.13 129.27 0.00 0.00 7728.16 3276.80 18122.13 00:22:56.853
[2024-12-09 10:33:58.211988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:57.113 [2024-12-09 10:33:58.212001]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:57.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (538934) - No such process 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 538934 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:57.113 delay0 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.113 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:57.372 [2024-12-09 10:33:58.350697] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:05.517 Initializing NVMe Controllers 00:23:05.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.517 Initialization complete. Launching workers. 00:23:05.517 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 22017 00:23:05.517 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22186, failed to submit 96 00:23:05.517 success 22065, unsuccessful 121, failed 0 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.517 rmmod nvme_tcp 00:23:05.517 rmmod nvme_fabrics 00:23:05.517 rmmod nvme_keyring 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 
-- # return 0 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 537080 ']' 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 537080 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 537080 ']' 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 537080 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 537080 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.517 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 537080' 00:23:05.517 killing process with pid 537080 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 537080 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 537080 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:23:05.518 10:34:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.518 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.896 00:23:06.896 real 0m32.041s 00:23:06.896 user 0m42.764s 00:23:06.896 sys 0m11.575s 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:23:06.896 ************************************ 00:23:06.896 END TEST nvmf_zcopy 00:23:06.896 ************************************ 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.896 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:06.896 ************************************ 00:23:06.896 START TEST nvmf_nmic 00:23:06.896 ************************************ 00:23:06.896 
10:34:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:06.896 * Looking for test storage... 00:23:06.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.897 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:06.897 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:23:06.897 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 
00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:07.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:07.155 --rc genhtml_branch_coverage=1 00:23:07.155 --rc genhtml_function_coverage=1 00:23:07.155 --rc genhtml_legend=1 00:23:07.155 --rc geninfo_all_blocks=1 00:23:07.155 --rc geninfo_unexecuted_blocks=1 00:23:07.155 00:23:07.155 ' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:07.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.155 --rc genhtml_branch_coverage=1 00:23:07.155 --rc genhtml_function_coverage=1 00:23:07.155 --rc genhtml_legend=1 00:23:07.155 --rc geninfo_all_blocks=1 00:23:07.155 --rc geninfo_unexecuted_blocks=1 00:23:07.155 00:23:07.155 ' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:07.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.155 --rc genhtml_branch_coverage=1 00:23:07.155 --rc genhtml_function_coverage=1 00:23:07.155 --rc genhtml_legend=1 00:23:07.155 --rc geninfo_all_blocks=1 00:23:07.155 --rc geninfo_unexecuted_blocks=1 00:23:07.155 00:23:07.155 ' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:07.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.155 --rc genhtml_branch_coverage=1 00:23:07.155 --rc genhtml_function_coverage=1 00:23:07.155 --rc genhtml_legend=1 00:23:07.155 --rc geninfo_all_blocks=1 00:23:07.155 --rc geninfo_unexecuted_blocks=1 00:23:07.155 00:23:07.155 ' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.155 10:34:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.155 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.156 
10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.156 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.477 10:34:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:12.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:12.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.477 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:12.478 Found net devices under 0000:86:00.0: cvl_0_0 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:12.478 Found net devices under 0000:86:00.1: cvl_0_1 00:23:12.478 
10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:23:12.478 00:23:12.478 --- 10.0.0.2 ping statistics --- 00:23:12.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.478 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:12.478 00:23:12.478 --- 10.0.0.1 ping statistics --- 00:23:12.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.478 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=544537 00:23:12.478 
10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 544537 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 544537 ']' 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.478 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.478 [2024-12-09 10:34:13.575843] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:23:12.478 [2024-12-09 10:34:13.575888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.478 [2024-12-09 10:34:13.644618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.737 [2024-12-09 10:34:13.689473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.737 [2024-12-09 10:34:13.689507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.737 [2024-12-09 10:34:13.689514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.737 [2024-12-09 10:34:13.689520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:12.737 [2024-12-09 10:34:13.689525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.737 [2024-12-09 10:34:13.691101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.738 [2024-12-09 10:34:13.691199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.738 [2024-12-09 10:34:13.691259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.738 [2024-12-09 10:34:13.691262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 [2024-12-09 10:34:13.830220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.738 10:34:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 Malloc0 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 [2024-12-09 10:34:13.901204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:12.738 test case1: single bdev can't be used in multiple subsystems 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.997 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 [2024-12-09 10:34:13.929110] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:12.997 [2024-12-09 10:34:13.929129] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:12.998 [2024-12-09 10:34:13.929137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:23:12.998 request: 00:23:12.998 { 00:23:12.998 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.998 "namespace": { 00:23:12.998 "bdev_name": "Malloc0", 00:23:12.998 "no_auto_visible": false, 00:23:12.998 "hide_metadata": false 00:23:12.998 }, 00:23:12.998 "method": "nvmf_subsystem_add_ns", 00:23:12.998 "req_id": 1 00:23:12.998 } 00:23:12.998 Got JSON-RPC error response 00:23:12.998 response: 00:23:12.998 { 00:23:12.998 "code": -32602, 00:23:12.998 "message": "Invalid parameters" 00:23:12.998 } 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:12.998 Adding namespace failed - expected result. 
00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:12.998 test case2: host connect to nvmf target in multiple paths 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:12.998 [2024-12-09 10:34:13.941255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.998 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:14.376 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:15.312 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:15.312 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:23:15.312 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.312 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:15.312 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:23:17.443 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:17.443 [global] 00:23:17.443 thread=1 00:23:17.443 invalidate=1 00:23:17.443 rw=write 00:23:17.443 time_based=1 00:23:17.443 runtime=1 00:23:17.443 ioengine=libaio 00:23:17.443 direct=1 00:23:17.443 bs=4096 00:23:17.443 iodepth=1 00:23:17.443 norandommap=0 00:23:17.443 numjobs=1 00:23:17.443 00:23:17.443 verify_dump=1 00:23:17.443 verify_backlog=512 00:23:17.443 verify_state_save=0 00:23:17.443 do_verify=1 00:23:17.443 verify=crc32c-intel 00:23:17.443 [job0] 00:23:17.443 filename=/dev/nvme0n1 00:23:17.443 Could not set queue depth (nvme0n1) 00:23:17.704 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:17.704 fio-3.35 00:23:17.704 Starting 1 thread 00:23:18.633 00:23:18.633 job0: (groupid=0, jobs=1): err= 0: pid=545612: Mon Dec 9 10:34:19 2024 00:23:18.633 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:23:18.633 slat (nsec): min=6329, max=30294, avg=7320.57, stdev=1188.17 00:23:18.633 clat (usec): min=214, max=642, avg=262.41, stdev=23.91 00:23:18.633 lat (usec): min=221, max=672, avg=269.73, 
stdev=24.17 00:23:18.633 clat percentiles (usec): 00:23:18.633 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 239], 00:23:18.633 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:23:18.633 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 297], 00:23:18.633 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 334], 99.95th=[ 388], 00:23:18.633 | 99.99th=[ 644] 00:23:18.633 write: IOPS=2416, BW=9666KiB/s (9898kB/s)(9676KiB/1001msec); 0 zone resets 00:23:18.633 slat (nsec): min=9219, max=38298, avg=10258.57, stdev=1205.41 00:23:18.633 clat (usec): min=131, max=388, avg=171.12, stdev=19.84 00:23:18.633 lat (usec): min=141, max=404, avg=181.38, stdev=19.96 00:23:18.633 clat percentiles (usec): 00:23:18.633 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 161], 00:23:18.633 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:23:18.633 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 200], 95.00th=[ 215], 00:23:18.633 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 367], 00:23:18.633 | 99.99th=[ 388] 00:23:18.633 bw ( KiB/s): min= 8990, max= 8990, per=93.00%, avg=8990.00, stdev= 0.00, samples=1 00:23:18.633 iops : min= 2247, max= 2247, avg=2247.00, stdev= 0.00, samples=1 00:23:18.633 lat (usec) : 250=65.59%, 500=34.39%, 750=0.02% 00:23:18.633 cpu : usr=2.00%, sys=4.20%, ctx=4467, majf=0, minf=1 00:23:18.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.633 issued rwts: total=2048,2419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:18.633 00:23:18.633 Run status group 0 (all jobs): 00:23:18.633 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:23:18.633 WRITE: bw=9666KiB/s 
(9898kB/s), 9666KiB/s-9666KiB/s (9898kB/s-9898kB/s), io=9676KiB (9908kB), run=1001-1001msec 00:23:18.633 00:23:18.633 Disk stats (read/write): 00:23:18.633 nvme0n1: ios=1961/2048, merge=0/0, ticks=604/339, in_queue=943, util=95.59% 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:18.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:18.889 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:23:18.889 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.889 10:34:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.889 rmmod nvme_tcp 00:23:18.889 rmmod nvme_fabrics 00:23:18.889 rmmod nvme_keyring 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 544537 ']' 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 544537 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 544537 ']' 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 544537 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 544537 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 544537' 00:23:19.147 killing process with pid 544537 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 544537 00:23:19.147 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 544537 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.405 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.305 00:23:21.305 real 0m14.472s 00:23:21.305 user 0m33.385s 00:23:21.305 sys 0m4.864s 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:21.305 ************************************ 00:23:21.305 END TEST nvmf_nmic 00:23:21.305 ************************************ 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:21.305 10:34:22 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.305 10:34:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:21.564 ************************************ 00:23:21.564 START TEST nvmf_fio_target 00:23:21.564 ************************************ 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:21.564 * Looking for test storage... 00:23:21.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.564 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.564 
10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.565 --rc genhtml_branch_coverage=1 00:23:21.565 --rc genhtml_function_coverage=1 00:23:21.565 --rc genhtml_legend=1 00:23:21.565 --rc geninfo_all_blocks=1 00:23:21.565 --rc geninfo_unexecuted_blocks=1 00:23:21.565 00:23:21.565 ' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.565 --rc genhtml_branch_coverage=1 00:23:21.565 --rc genhtml_function_coverage=1 00:23:21.565 --rc genhtml_legend=1 00:23:21.565 --rc geninfo_all_blocks=1 00:23:21.565 --rc geninfo_unexecuted_blocks=1 00:23:21.565 00:23:21.565 ' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.565 --rc genhtml_branch_coverage=1 00:23:21.565 --rc genhtml_function_coverage=1 00:23:21.565 --rc genhtml_legend=1 00:23:21.565 --rc geninfo_all_blocks=1 00:23:21.565 --rc geninfo_unexecuted_blocks=1 00:23:21.565 00:23:21.565 ' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.565 --rc genhtml_branch_coverage=1 00:23:21.565 --rc 
genhtml_function_coverage=1 00:23:21.565 --rc genhtml_legend=1 00:23:21.565 --rc geninfo_all_blocks=1 00:23:21.565 --rc geninfo_unexecuted_blocks=1 00:23:21.565 00:23:21.565 ' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.565 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.566 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.832 10:34:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.832 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.833 10:34:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.833 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.833 Found net devices under 0000:86:00.1: cvl_0_1 
00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:23:26.833 00:23:26.833 --- 10.0.0.2 ping statistics --- 00:23:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.833 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:23:26.833 00:23:26.833 --- 10.0.0.1 ping statistics --- 00:23:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.833 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=549350 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 549350 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 549350 ']' 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.833 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.092 [2024-12-09 10:34:28.013752] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:23:27.092 [2024-12-09 10:34:28.013798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.092 [2024-12-09 10:34:28.084686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.092 [2024-12-09 10:34:28.127424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.092 [2024-12-09 10:34:28.127462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.092 [2024-12-09 10:34:28.127469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.092 [2024-12-09 10:34:28.127475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.092 [2024-12-09 10:34:28.127480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.092 [2024-12-09 10:34:28.129082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.092 [2024-12-09 10:34:28.129101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.092 [2024-12-09 10:34:28.129202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.092 [2024-12-09 10:34:28.129220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.092 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.092 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:23:27.092 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.092 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.092 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.388 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.388 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:27.388 [2024-12-09 10:34:28.436453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.388 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:27.646 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:27.646 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:27.903 10:34:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:27.903 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:28.161 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:28.162 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:28.162 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:28.162 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:28.420 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:28.678 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:28.678 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:28.936 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:28.936 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:29.194 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:29.194 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:23:29.194 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:29.451 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:29.451 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.710 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:29.710 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:29.969 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.969 [2024-12-09 10:34:31.129912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.228 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:30.228 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:30.487 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:23:31.865 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:23:33.771 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:33.771 [global] 00:23:33.771 thread=1 00:23:33.771 invalidate=1 00:23:33.771 rw=write 00:23:33.771 time_based=1 00:23:33.771 runtime=1 00:23:33.771 ioengine=libaio 00:23:33.771 direct=1 00:23:33.771 bs=4096 00:23:33.771 iodepth=1 00:23:33.771 norandommap=0 00:23:33.771 numjobs=1 00:23:33.771 00:23:33.771 
verify_dump=1 00:23:33.771 verify_backlog=512 00:23:33.771 verify_state_save=0 00:23:33.771 do_verify=1 00:23:33.771 verify=crc32c-intel 00:23:33.771 [job0] 00:23:33.771 filename=/dev/nvme0n1 00:23:33.771 [job1] 00:23:33.771 filename=/dev/nvme0n2 00:23:33.771 [job2] 00:23:33.771 filename=/dev/nvme0n3 00:23:33.771 [job3] 00:23:33.771 filename=/dev/nvme0n4 00:23:33.771 Could not set queue depth (nvme0n1) 00:23:33.771 Could not set queue depth (nvme0n2) 00:23:33.771 Could not set queue depth (nvme0n3) 00:23:33.771 Could not set queue depth (nvme0n4) 00:23:34.030 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.030 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.030 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.030 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.030 fio-3.35 00:23:34.030 Starting 4 threads 00:23:35.405 00:23:35.405 job0: (groupid=0, jobs=1): err= 0: pid=550746: Mon Dec 9 10:34:36 2024 00:23:35.405 read: IOPS=300, BW=1203KiB/s (1232kB/s)(1220KiB/1014msec) 00:23:35.405 slat (nsec): min=6630, max=24032, avg=8510.51, stdev=3776.06 00:23:35.405 clat (usec): min=224, max=41941, avg=2944.85, stdev=10106.94 00:23:35.405 lat (usec): min=232, max=41964, avg=2953.36, stdev=10110.43 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:23:35.405 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:23:35.405 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 383], 95.00th=[41157], 00:23:35.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:35.405 | 99.99th=[41681] 00:23:35.405 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:23:35.405 slat (nsec): min=6276, max=87770, avg=12019.89, 
stdev=4541.34 00:23:35.405 clat (usec): min=150, max=931, avg=203.13, stdev=56.17 00:23:35.405 lat (usec): min=160, max=942, avg=215.15, stdev=56.20 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:23:35.405 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:23:35.405 | 70.00th=[ 206], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 258], 00:23:35.405 | 99.00th=[ 330], 99.50th=[ 644], 99.90th=[ 930], 99.95th=[ 930], 00:23:35.405 | 99.99th=[ 930] 00:23:35.405 bw ( KiB/s): min= 4096, max= 4096, per=25.70%, avg=4096.00, stdev= 0.00, samples=1 00:23:35.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:35.405 lat (usec) : 250=70.26%, 500=26.81%, 750=0.37%, 1000=0.12% 00:23:35.405 lat (msec) : 50=2.45% 00:23:35.405 cpu : usr=0.20%, sys=0.99%, ctx=820, majf=0, minf=1 00:23:35.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 issued rwts: total=305,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.405 job1: (groupid=0, jobs=1): err= 0: pid=550747: Mon Dec 9 10:34:36 2024 00:23:35.405 read: IOPS=2020, BW=8084KiB/s (8278kB/s)(8092KiB/1001msec) 00:23:35.405 slat (nsec): min=5551, max=43655, avg=8470.46, stdev=2076.22 00:23:35.405 clat (usec): min=199, max=516, avg=283.37, stdev=58.59 00:23:35.405 lat (usec): min=206, max=542, avg=291.84, stdev=58.49 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 243], 00:23:35.405 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:23:35.405 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 400], 95.00th=[ 408], 00:23:35.405 | 99.00th=[ 416], 99.50th=[ 420], 99.90th=[ 449], 99.95th=[ 502], 
00:23:35.405 | 99.99th=[ 519] 00:23:35.405 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:23:35.405 slat (usec): min=9, max=33216, avg=28.03, stdev=733.72 00:23:35.405 clat (usec): min=130, max=520, avg=166.82, stdev=24.70 00:23:35.405 lat (usec): min=141, max=33430, avg=194.85, stdev=735.22 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:23:35.405 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:23:35.405 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 212], 00:23:35.405 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 310], 00:23:35.405 | 99.99th=[ 523] 00:23:35.405 bw ( KiB/s): min= 8192, max= 8192, per=51.40%, avg=8192.00, stdev= 0.00, samples=1 00:23:35.405 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:35.405 lat (usec) : 250=63.79%, 500=36.13%, 750=0.07% 00:23:35.405 cpu : usr=2.70%, sys=3.70%, ctx=4074, majf=0, minf=1 00:23:35.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 issued rwts: total=2023,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.405 job2: (groupid=0, jobs=1): err= 0: pid=550749: Mon Dec 9 10:34:36 2024 00:23:35.405 read: IOPS=513, BW=2054KiB/s (2104kB/s)(2112KiB/1028msec) 00:23:35.405 slat (nsec): min=6043, max=27429, avg=8065.69, stdev=2795.92 00:23:35.405 clat (usec): min=254, max=41249, avg=1520.11, stdev=6983.92 00:23:35.405 lat (usec): min=261, max=41263, avg=1528.18, stdev=6986.41 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:23:35.405 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:23:35.405 | 
70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 326], 00:23:35.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:35.405 | 99.99th=[41157] 00:23:35.405 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:23:35.405 slat (nsec): min=9016, max=39150, avg=12028.24, stdev=2088.61 00:23:35.405 clat (usec): min=140, max=451, avg=199.58, stdev=31.53 00:23:35.405 lat (usec): min=154, max=462, avg=211.61, stdev=31.83 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:23:35.405 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 204], 00:23:35.405 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 255], 00:23:35.405 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 420], 99.95th=[ 453], 00:23:35.405 | 99.99th=[ 453] 00:23:35.405 bw ( KiB/s): min= 8192, max= 8192, per=51.40%, avg=8192.00, stdev= 0.00, samples=1 00:23:35.405 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:35.405 lat (usec) : 250=62.05%, 500=36.86%, 750=0.06% 00:23:35.405 lat (msec) : 50=1.03% 00:23:35.405 cpu : usr=0.78%, sys=1.56%, ctx=1553, majf=0, minf=1 00:23:35.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.405 job3: (groupid=0, jobs=1): err= 0: pid=550750: Mon Dec 9 10:34:36 2024 00:23:35.405 read: IOPS=22, BW=91.8KiB/s (94.0kB/s)(92.0KiB/1002msec) 00:23:35.405 slat (nsec): min=8687, max=24139, avg=21495.39, stdev=4035.51 00:23:35.405 clat (usec): min=461, max=41986, avg=37785.07, stdev=10841.30 00:23:35.405 lat (usec): min=485, max=42008, avg=37806.57, stdev=10841.02 00:23:35.405 clat 
percentiles (usec): 00:23:35.405 | 1.00th=[ 461], 5.00th=[ 6652], 10.00th=[40633], 20.00th=[41157], 00:23:35.405 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:35.405 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:35.405 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:35.405 | 99.99th=[42206] 00:23:35.405 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:23:35.405 slat (nsec): min=8349, max=49309, avg=12409.35, stdev=4831.04 00:23:35.405 clat (usec): min=148, max=578, avg=240.99, stdev=53.01 00:23:35.405 lat (usec): min=158, max=591, avg=253.40, stdev=53.44 00:23:35.405 clat percentiles (usec): 00:23:35.405 | 1.00th=[ 155], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 200], 00:23:35.405 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 243], 00:23:35.405 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 322], 00:23:35.405 | 99.00th=[ 404], 99.50th=[ 474], 99.90th=[ 578], 99.95th=[ 578], 00:23:35.405 | 99.99th=[ 578] 00:23:35.405 bw ( KiB/s): min= 4096, max= 4096, per=25.70%, avg=4096.00, stdev= 0.00, samples=1 00:23:35.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:35.405 lat (usec) : 250=62.43%, 500=33.27%, 750=0.19% 00:23:35.405 lat (msec) : 10=0.19%, 50=3.93% 00:23:35.405 cpu : usr=0.30%, sys=0.60%, ctx=538, majf=0, minf=1 00:23:35.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.405 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.405 00:23:35.405 Run status group 0 (all jobs): 00:23:35.405 READ: bw=10.9MiB/s (11.5MB/s), 91.8KiB/s-8084KiB/s (94.0kB/s-8278kB/s), io=11.2MiB (11.8MB), run=1001-1028msec 00:23:35.405 
WRITE: bw=15.6MiB/s (16.3MB/s), 2020KiB/s-8184KiB/s (2068kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1028msec 00:23:35.405 00:23:35.405 Disk stats (read/write): 00:23:35.405 nvme0n1: ios=350/512, merge=0/0, ticks=1046/101, in_queue=1147, util=85.97% 00:23:35.405 nvme0n2: ios=1586/1973, merge=0/0, ticks=745/322, in_queue=1067, util=90.75% 00:23:35.405 nvme0n3: ios=580/1024, merge=0/0, ticks=742/199, in_queue=941, util=93.65% 00:23:35.405 nvme0n4: ios=75/512, merge=0/0, ticks=772/121, in_queue=893, util=95.38% 00:23:35.406 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:35.406 [global] 00:23:35.406 thread=1 00:23:35.406 invalidate=1 00:23:35.406 rw=randwrite 00:23:35.406 time_based=1 00:23:35.406 runtime=1 00:23:35.406 ioengine=libaio 00:23:35.406 direct=1 00:23:35.406 bs=4096 00:23:35.406 iodepth=1 00:23:35.406 norandommap=0 00:23:35.406 numjobs=1 00:23:35.406 00:23:35.406 verify_dump=1 00:23:35.406 verify_backlog=512 00:23:35.406 verify_state_save=0 00:23:35.406 do_verify=1 00:23:35.406 verify=crc32c-intel 00:23:35.406 [job0] 00:23:35.406 filename=/dev/nvme0n1 00:23:35.406 [job1] 00:23:35.406 filename=/dev/nvme0n2 00:23:35.406 [job2] 00:23:35.406 filename=/dev/nvme0n3 00:23:35.406 [job3] 00:23:35.406 filename=/dev/nvme0n4 00:23:35.406 Could not set queue depth (nvme0n1) 00:23:35.406 Could not set queue depth (nvme0n2) 00:23:35.406 Could not set queue depth (nvme0n3) 00:23:35.406 Could not set queue depth (nvme0n4) 00:23:35.664 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:35.664 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:35.664 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:35.664 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:35.664 fio-3.35 00:23:35.664 Starting 4 threads 00:23:37.043 00:23:37.043 job0: (groupid=0, jobs=1): err= 0: pid=551118: Mon Dec 9 10:34:37 2024 00:23:37.043 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:23:37.043 slat (nsec): min=3245, max=24473, avg=8406.54, stdev=1993.67 00:23:37.043 clat (usec): min=220, max=41988, avg=743.44, stdev=4389.73 00:23:37.043 lat (usec): min=227, max=42010, avg=751.85, stdev=4391.04 00:23:37.043 clat percentiles (usec): 00:23:37.043 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:23:37.043 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:23:37.043 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:23:37.043 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:23:37.043 | 99.99th=[42206] 00:23:37.043 write: IOPS=1076, BW=4308KiB/s (4411kB/s)(4312KiB/1001msec); 0 zone resets 00:23:37.043 slat (nsec): min=3064, max=37911, avg=7714.01, stdev=4153.70 00:23:37.043 clat (usec): min=117, max=903, avg=200.62, stdev=45.19 00:23:37.043 lat (usec): min=120, max=908, avg=208.33, stdev=45.60 00:23:37.043 clat percentiles (usec): 00:23:37.043 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:23:37.043 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:23:37.043 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 277], 00:23:37.043 | 99.00th=[ 351], 99.50th=[ 375], 99.90th=[ 619], 99.95th=[ 906], 00:23:37.043 | 99.99th=[ 906] 00:23:37.043 bw ( KiB/s): min= 4096, max= 4096, per=17.75%, avg=4096.00, stdev= 0.00, samples=1 00:23:37.043 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:37.043 lat (usec) : 250=55.80%, 500=43.48%, 750=0.10%, 1000=0.05% 00:23:37.043 lat (msec) : 50=0.57% 00:23:37.043 cpu : usr=1.30%, sys=3.00%, ctx=2102, majf=0, minf=1 00:23:37.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:23:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.043 issued rwts: total=1024,1078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:37.043 job1: (groupid=0, jobs=1): err= 0: pid=551119: Mon Dec 9 10:34:37 2024 00:23:37.043 read: IOPS=2025, BW=8100KiB/s (8294kB/s)(8100KiB/1000msec) 00:23:37.043 slat (nsec): min=6187, max=29899, avg=7144.36, stdev=1131.56 00:23:37.043 clat (usec): min=207, max=1039, avg=286.02, stdev=52.68 00:23:37.043 lat (usec): min=214, max=1045, avg=293.16, stdev=52.73 00:23:37.043 clat percentiles (usec): 00:23:37.043 | 1.00th=[ 225], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:23:37.043 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:23:37.043 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 343], 95.00th=[ 433], 00:23:37.043 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 465], 99.95th=[ 465], 00:23:37.043 | 99.99th=[ 1037] 00:23:37.043 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:23:37.043 slat (nsec): min=4778, max=45491, avg=9947.91, stdev=2161.19 00:23:37.043 clat (usec): min=131, max=956, avg=184.12, stdev=35.24 00:23:37.043 lat (usec): min=141, max=968, avg=194.07, stdev=35.50 00:23:37.043 clat percentiles (usec): 00:23:37.043 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:23:37.043 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 186], 00:23:37.043 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 239], 00:23:37.043 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 424], 99.95th=[ 578], 00:23:37.043 | 99.99th=[ 955] 00:23:37.043 bw ( KiB/s): min= 8192, max= 8192, per=35.49%, avg=8192.00, stdev= 0.00, samples=1 00:23:37.043 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:37.043 lat (usec) : 250=52.66%, 500=47.26%, 
750=0.02%, 1000=0.02% 00:23:37.043 lat (msec) : 2=0.02% 00:23:37.043 cpu : usr=1.90%, sys=3.60%, ctx=4073, majf=0, minf=1 00:23:37.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.044 issued rwts: total=2025,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:37.044 job2: (groupid=0, jobs=1): err= 0: pid=551120: Mon Dec 9 10:34:37 2024 00:23:37.044 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:23:37.044 slat (nsec): min=6889, max=27685, avg=8668.50, stdev=1521.85 00:23:37.044 clat (usec): min=207, max=4960, avg=265.43, stdev=106.11 00:23:37.044 lat (usec): min=215, max=4972, avg=274.10, stdev=106.29 00:23:37.044 clat percentiles (usec): 00:23:37.044 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:23:37.044 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:23:37.044 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:23:37.044 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 412], 00:23:37.044 | 99.99th=[ 4948] 00:23:37.044 write: IOPS=2135, BW=8543KiB/s (8748kB/s)(8552KiB/1001msec); 0 zone resets 00:23:37.044 slat (nsec): min=10261, max=46009, avg=11782.94, stdev=1554.52 00:23:37.044 clat (usec): min=131, max=837, avg=187.47, stdev=29.91 00:23:37.044 lat (usec): min=143, max=850, avg=199.25, stdev=30.05 00:23:37.044 clat percentiles (usec): 00:23:37.044 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:23:37.044 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:23:37.044 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 227], 00:23:37.044 | 99.00th=[ 285], 99.50th=[ 326], 99.90th=[ 545], 99.95th=[ 578], 00:23:37.044 | 99.99th=[ 840] 00:23:37.044 bw ( KiB/s): min= 8968, 
max= 8968, per=38.85%, avg=8968.00, stdev= 0.00, samples=1 00:23:37.044 iops : min= 2242, max= 2242, avg=2242.00, stdev= 0.00, samples=1 00:23:37.044 lat (usec) : 250=62.80%, 500=37.10%, 750=0.05%, 1000=0.02% 00:23:37.044 lat (msec) : 10=0.02% 00:23:37.044 cpu : usr=3.00%, sys=5.30%, ctx=4187, majf=0, minf=1 00:23:37.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.044 issued rwts: total=2048,2138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:37.044 job3: (groupid=0, jobs=1): err= 0: pid=551121: Mon Dec 9 10:34:37 2024 00:23:37.044 read: IOPS=103, BW=416KiB/s (426kB/s)(416KiB/1001msec) 00:23:37.044 slat (nsec): min=8753, max=26739, avg=13115.93, stdev=5520.69 00:23:37.044 clat (usec): min=263, max=41094, avg=8511.41, stdev=16395.93 00:23:37.044 lat (usec): min=272, max=41117, avg=8524.53, stdev=16400.92 00:23:37.044 clat percentiles (usec): 00:23:37.044 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:23:37.044 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:23:37.044 | 70.00th=[ 343], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:23:37.044 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:37.044 | 99.99th=[41157] 00:23:37.044 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:23:37.044 slat (nsec): min=12142, max=38478, avg=14723.74, stdev=2996.28 00:23:37.044 clat (usec): min=157, max=1102, avg=202.80, stdev=56.77 00:23:37.044 lat (usec): min=171, max=1116, avg=217.53, stdev=57.04 00:23:37.044 clat percentiles (usec): 00:23:37.044 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:23:37.044 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:23:37.044 | 70.00th=[ 
202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 249], 00:23:37.044 | 99.00th=[ 334], 99.50th=[ 619], 99.90th=[ 1106], 99.95th=[ 1106], 00:23:37.044 | 99.99th=[ 1106] 00:23:37.044 bw ( KiB/s): min= 4096, max= 4096, per=17.75%, avg=4096.00, stdev= 0.00, samples=1 00:23:37.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:37.044 lat (usec) : 250=79.06%, 500=16.88%, 750=0.49% 00:23:37.044 lat (msec) : 2=0.16%, 50=3.41% 00:23:37.044 cpu : usr=0.40%, sys=1.40%, ctx=617, majf=0, minf=1 00:23:37.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.044 issued rwts: total=104,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:37.044 00:23:37.044 Run status group 0 (all jobs): 00:23:37.044 READ: bw=20.3MiB/s (21.3MB/s), 416KiB/s-8184KiB/s (426kB/s-8380kB/s), io=20.3MiB (21.3MB), run=1000-1001msec 00:23:37.044 WRITE: bw=22.5MiB/s (23.6MB/s), 2046KiB/s-8543KiB/s (2095kB/s-8748kB/s), io=22.6MiB (23.7MB), run=1000-1001msec 00:23:37.044 00:23:37.044 Disk stats (read/write): 00:23:37.044 nvme0n1: ios=612/1024, merge=0/0, ticks=651/185, in_queue=836, util=87.17% 00:23:37.044 nvme0n2: ios=1601/2048, merge=0/0, ticks=496/367, in_queue=863, util=91.38% 00:23:37.044 nvme0n3: ios=1643/2048, merge=0/0, ticks=588/377, in_queue=965, util=99.17% 00:23:37.044 nvme0n4: ios=69/512, merge=0/0, ticks=900/100, in_queue=1000, util=99.16% 00:23:37.044 10:34:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:37.044 [global] 00:23:37.044 thread=1 00:23:37.044 invalidate=1 00:23:37.044 rw=write 00:23:37.044 time_based=1 00:23:37.044 runtime=1 00:23:37.044 ioengine=libaio 
00:23:37.044 direct=1 00:23:37.044 bs=4096 00:23:37.044 iodepth=128 00:23:37.044 norandommap=0 00:23:37.044 numjobs=1 00:23:37.044 00:23:37.044 verify_dump=1 00:23:37.044 verify_backlog=512 00:23:37.044 verify_state_save=0 00:23:37.044 do_verify=1 00:23:37.044 verify=crc32c-intel 00:23:37.044 [job0] 00:23:37.044 filename=/dev/nvme0n1 00:23:37.044 [job1] 00:23:37.044 filename=/dev/nvme0n2 00:23:37.044 [job2] 00:23:37.044 filename=/dev/nvme0n3 00:23:37.044 [job3] 00:23:37.044 filename=/dev/nvme0n4 00:23:37.044 Could not set queue depth (nvme0n1) 00:23:37.044 Could not set queue depth (nvme0n2) 00:23:37.044 Could not set queue depth (nvme0n3) 00:23:37.044 Could not set queue depth (nvme0n4) 00:23:37.302 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.302 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.302 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.302 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.302 fio-3.35 00:23:37.302 Starting 4 threads 00:23:38.672 00:23:38.672 job0: (groupid=0, jobs=1): err= 0: pid=551495: Mon Dec 9 10:34:39 2024 00:23:38.672 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:23:38.672 slat (nsec): min=1408, max=9457.6k, avg=86838.45, stdev=636135.69 00:23:38.672 clat (usec): min=2341, max=20345, avg=10912.71, stdev=2421.26 00:23:38.672 lat (usec): min=3480, max=23644, avg=10999.55, stdev=2468.05 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 4621], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[ 9634], 00:23:38.672 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:23:38.672 | 70.00th=[10814], 80.00th=[11994], 90.00th=[14615], 95.00th=[16319], 00:23:38.672 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 
00:23:38.672 | 99.99th=[20317] 00:23:38.672 write: IOPS=6420, BW=25.1MiB/s (26.3MB/s)(25.3MiB/1009msec); 0 zone resets 00:23:38.672 slat (usec): min=2, max=8177, avg=66.35, stdev=394.64 00:23:38.672 clat (usec): min=1756, max=19264, avg=9403.07, stdev=1993.76 00:23:38.672 lat (usec): min=1772, max=19268, avg=9469.43, stdev=2032.63 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 3032], 5.00th=[ 5080], 10.00th=[ 6915], 20.00th=[ 8455], 00:23:38.672 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:23:38.672 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10814], 00:23:38.672 | 99.00th=[15533], 99.50th=[17433], 99.90th=[18482], 99.95th=[18744], 00:23:38.672 | 99.99th=[19268] 00:23:38.672 bw ( KiB/s): min=25288, max=25520, per=34.66%, avg=25404.00, stdev=164.05, samples=2 00:23:38.672 iops : min= 6322, max= 6380, avg=6351.00, stdev=41.01, samples=2 00:23:38.672 lat (msec) : 2=0.02%, 4=1.48%, 10=46.06%, 20=52.43%, 50=0.01% 00:23:38.672 cpu : usr=4.96%, sys=7.14%, ctx=663, majf=0, minf=1 00:23:38.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:38.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.672 issued rwts: total=6144,6478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.672 job1: (groupid=0, jobs=1): err= 0: pid=551496: Mon Dec 9 10:34:39 2024 00:23:38.672 read: IOPS=5986, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1004msec) 00:23:38.672 slat (nsec): min=1294, max=10187k, avg=93212.74, stdev=668774.22 00:23:38.672 clat (usec): min=2081, max=20971, avg=11353.14, stdev=2691.84 00:23:38.672 lat (usec): min=3563, max=20993, avg=11446.36, stdev=2729.93 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 4359], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:23:38.672 | 30.00th=[10159], 
40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:23:38.672 | 70.00th=[11338], 80.00th=[12649], 90.00th=[15795], 95.00th=[17433], 00:23:38.672 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:23:38.672 | 99.99th=[20841] 00:23:38.672 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:23:38.672 slat (usec): min=2, max=8485, avg=66.72, stdev=270.30 00:23:38.672 clat (usec): min=1550, max=20110, avg=9617.62, stdev=2185.56 00:23:38.672 lat (usec): min=1565, max=20113, avg=9684.34, stdev=2209.91 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 2474], 5.00th=[ 4293], 10.00th=[ 6128], 20.00th=[ 8717], 00:23:38.672 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10552], 00:23:38.672 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:23:38.672 | 99.00th=[11338], 99.50th=[13960], 99.90th=[19530], 99.95th=[19530], 00:23:38.672 | 99.99th=[20055] 00:23:38.672 bw ( KiB/s): min=24576, max=24576, per=33.53%, avg=24576.00, stdev= 0.00, samples=2 00:23:38.672 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:23:38.672 lat (msec) : 2=0.19%, 4=2.20%, 10=24.32%, 20=73.21%, 50=0.08% 00:23:38.672 cpu : usr=4.49%, sys=5.58%, ctx=807, majf=0, minf=2 00:23:38.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:38.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.672 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.672 job2: (groupid=0, jobs=1): err= 0: pid=551497: Mon Dec 9 10:34:39 2024 00:23:38.672 read: IOPS=3256, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1010msec) 00:23:38.672 slat (nsec): min=1762, max=27308k, avg=158931.88, stdev=1160157.13 00:23:38.672 clat (usec): min=4403, max=64977, avg=17014.30, stdev=8709.07 00:23:38.672 lat 
(usec): min=6327, max=64983, avg=17173.23, stdev=8873.91 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 7373], 5.00th=[11207], 10.00th=[11469], 20.00th=[12387], 00:23:38.672 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:23:38.672 | 70.00th=[15401], 80.00th=[20579], 90.00th=[28705], 95.00th=[33817], 00:23:38.672 | 99.00th=[53216], 99.50th=[55313], 99.90th=[64750], 99.95th=[64750], 00:23:38.672 | 99.99th=[64750] 00:23:38.672 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:23:38.672 slat (usec): min=2, max=12903, avg=127.19, stdev=794.28 00:23:38.672 clat (usec): min=1507, max=64971, avg=19973.38, stdev=14288.62 00:23:38.672 lat (usec): min=1519, max=64976, avg=20100.57, stdev=14356.73 00:23:38.672 clat percentiles (usec): 00:23:38.672 | 1.00th=[ 4686], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10945], 00:23:38.672 | 30.00th=[11338], 40.00th=[12125], 50.00th=[15533], 60.00th=[17433], 00:23:38.672 | 70.00th=[22414], 80.00th=[23200], 90.00th=[41157], 95.00th=[62653], 00:23:38.672 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64226], 99.95th=[64750], 00:23:38.672 | 99.99th=[64750] 00:23:38.672 bw ( KiB/s): min=12816, max=15856, per=19.56%, avg=14336.00, stdev=2149.60, samples=2 00:23:38.672 iops : min= 3204, max= 3964, avg=3584.00, stdev=537.40, samples=2 00:23:38.672 lat (msec) : 2=0.03%, 4=0.26%, 10=5.44%, 20=64.21%, 50=24.63% 00:23:38.672 lat (msec) : 100=5.43% 00:23:38.672 cpu : usr=3.37%, sys=4.06%, ctx=275, majf=0, minf=1 00:23:38.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:38.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.672 issued rwts: total=3289,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.672 job3: (groupid=0, jobs=1): err= 0: pid=551498: Mon Dec 9 
10:34:39 2024 00:23:38.672 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:23:38.673 slat (nsec): min=1501, max=16493k, avg=155324.51, stdev=1115890.52 00:23:38.673 clat (usec): min=6111, max=49706, avg=18632.02, stdev=7864.29 00:23:38.673 lat (usec): min=6119, max=49713, avg=18787.35, stdev=7947.51 00:23:38.673 clat percentiles (usec): 00:23:38.673 | 1.00th=[ 6390], 5.00th=[12256], 10.00th=[12518], 20.00th=[12780], 00:23:38.673 | 30.00th=[13173], 40.00th=[13435], 50.00th=[14091], 60.00th=[17957], 00:23:38.673 | 70.00th=[21103], 80.00th=[27395], 90.00th=[32900], 95.00th=[33424], 00:23:38.673 | 99.00th=[33817], 99.50th=[34341], 99.90th=[49546], 99.95th=[49546], 00:23:38.673 | 99.99th=[49546] 00:23:38.673 write: IOPS=2341, BW=9365KiB/s (9590kB/s)(9496KiB/1014msec); 0 zone resets 00:23:38.673 slat (usec): min=2, max=41062, avg=282.78, stdev=1632.25 00:23:38.673 clat (msec): min=3, max=118, avg=34.43, stdev=26.43 00:23:38.673 lat (msec): min=3, max=118, avg=34.71, stdev=26.58 00:23:38.673 clat percentiles (msec): 00:23:38.673 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 16], 00:23:38.673 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:23:38.673 | 70.00th=[ 37], 80.00th=[ 58], 90.00th=[ 73], 95.00th=[ 99], 00:23:38.673 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:23:38.673 | 99.99th=[ 118] 00:23:38.673 bw ( KiB/s): min= 8240, max= 9728, per=12.26%, avg=8984.00, stdev=1052.17, samples=2 00:23:38.673 iops : min= 2060, max= 2432, avg=2246.00, stdev=263.04, samples=2 00:23:38.673 lat (msec) : 4=0.27%, 10=4.27%, 20=40.91%, 50=42.29%, 100=9.61% 00:23:38.673 lat (msec) : 250=2.65% 00:23:38.673 cpu : usr=2.17%, sys=2.07%, ctx=265, majf=0, minf=1 00:23:38.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:38.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.673 
issued rwts: total=2048,2374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.673 00:23:38.673 Run status group 0 (all jobs): 00:23:38.673 READ: bw=67.4MiB/s (70.7MB/s), 8079KiB/s-23.8MiB/s (8273kB/s-24.9MB/s), io=68.3MiB (71.6MB), run=1004-1014msec 00:23:38.673 WRITE: bw=71.6MiB/s (75.1MB/s), 9365KiB/s-25.1MiB/s (9590kB/s-26.3MB/s), io=72.6MiB (76.1MB), run=1004-1014msec 00:23:38.673 00:23:38.673 Disk stats (read/write): 00:23:38.673 nvme0n1: ios=5169/5575, merge=0/0, ticks=54821/50855, in_queue=105676, util=90.08% 00:23:38.673 nvme0n2: ios=5170/5263, merge=0/0, ticks=55858/49389, in_queue=105247, util=90.87% 00:23:38.673 nvme0n3: ios=2607/2975, merge=0/0, ticks=43534/58422, in_queue=101956, util=96.57% 00:23:38.673 nvme0n4: ios=1813/2048, merge=0/0, ticks=35118/62960, in_queue=98078, util=99.69% 00:23:38.673 10:34:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:38.673 [global] 00:23:38.673 thread=1 00:23:38.673 invalidate=1 00:23:38.673 rw=randwrite 00:23:38.673 time_based=1 00:23:38.673 runtime=1 00:23:38.673 ioengine=libaio 00:23:38.673 direct=1 00:23:38.673 bs=4096 00:23:38.673 iodepth=128 00:23:38.673 norandommap=0 00:23:38.673 numjobs=1 00:23:38.673 00:23:38.673 verify_dump=1 00:23:38.673 verify_backlog=512 00:23:38.673 verify_state_save=0 00:23:38.673 do_verify=1 00:23:38.673 verify=crc32c-intel 00:23:38.673 [job0] 00:23:38.673 filename=/dev/nvme0n1 00:23:38.673 [job1] 00:23:38.673 filename=/dev/nvme0n2 00:23:38.673 [job2] 00:23:38.673 filename=/dev/nvme0n3 00:23:38.673 [job3] 00:23:38.673 filename=/dev/nvme0n4 00:23:38.673 Could not set queue depth (nvme0n1) 00:23:38.673 Could not set queue depth (nvme0n2) 00:23:38.673 Could not set queue depth (nvme0n3) 00:23:38.673 Could not set queue depth (nvme0n4) 00:23:38.673 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:38.673 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:38.673 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:38.673 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:38.673 fio-3.35 00:23:38.673 Starting 4 threads 00:23:40.046 00:23:40.046 job0: (groupid=0, jobs=1): err= 0: pid=551868: Mon Dec 9 10:34:40 2024 00:23:40.046 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:23:40.046 slat (nsec): min=1747, max=20701k, avg=226201.56, stdev=1445372.27 00:23:40.046 clat (msec): min=5, max=127, avg=23.44, stdev=20.47 00:23:40.046 lat (msec): min=5, max=127, avg=23.66, stdev=20.63 00:23:40.046 clat percentiles (msec): 00:23:40.046 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:23:40.046 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:23:40.046 | 70.00th=[ 24], 80.00th=[ 33], 90.00th=[ 57], 95.00th=[ 66], 00:23:40.046 | 99.00th=[ 113], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 128], 00:23:40.046 | 99.99th=[ 128] 00:23:40.046 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1014msec); 0 zone resets 00:23:40.046 slat (usec): min=2, max=9823, avg=138.45, stdev=613.54 00:23:40.046 clat (msec): min=2, max=127, avg=23.48, stdev=16.59 00:23:40.046 lat (msec): min=2, max=127, avg=23.62, stdev=16.64 00:23:40.046 clat percentiles (msec): 00:23:40.046 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:23:40.046 | 30.00th=[ 17], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:23:40.046 | 70.00th=[ 23], 80.00th=[ 24], 90.00th=[ 47], 95.00th=[ 61], 00:23:40.046 | 99.00th=[ 99], 99.50th=[ 113], 99.90th=[ 117], 99.95th=[ 128], 00:23:40.046 | 99.99th=[ 128] 00:23:40.046 bw ( KiB/s): min= 9976, max=12288, per=15.43%, avg=11132.00, stdev=1634.83, samples=2 
00:23:40.046 iops : min= 2494, max= 3072, avg=2783.00, stdev=408.71, samples=2 00:23:40.046 lat (msec) : 4=0.44%, 10=6.98%, 20=41.24%, 50=41.90%, 100=8.14% 00:23:40.046 lat (msec) : 250=1.30% 00:23:40.046 cpu : usr=2.07%, sys=3.55%, ctx=335, majf=0, minf=1 00:23:40.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:40.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.046 issued rwts: total=2560,2910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.046 job1: (groupid=0, jobs=1): err= 0: pid=551869: Mon Dec 9 10:34:40 2024 00:23:40.046 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:23:40.046 slat (nsec): min=1362, max=4987.8k, avg=80637.45, stdev=440222.98 00:23:40.046 clat (usec): min=6014, max=15073, avg=10105.33, stdev=1252.14 00:23:40.046 lat (usec): min=6111, max=15264, avg=10185.97, stdev=1288.42 00:23:40.046 clat percentiles (usec): 00:23:40.046 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9503], 00:23:40.046 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:23:40.046 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12256], 00:23:40.046 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14222], 99.95th=[14484], 00:23:40.046 | 99.99th=[15008] 00:23:40.046 write: IOPS=6343, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1005msec); 0 zone resets 00:23:40.046 slat (usec): min=2, max=7467, avg=73.35, stdev=403.08 00:23:40.046 clat (usec): min=3945, max=17228, avg=10163.63, stdev=1348.92 00:23:40.046 lat (usec): min=4548, max=17253, avg=10236.98, stdev=1366.15 00:23:40.046 clat percentiles (usec): 00:23:40.046 | 1.00th=[ 5932], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[ 9634], 00:23:40.046 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:23:40.046 | 70.00th=[10421], 
80.00th=[10552], 90.00th=[11469], 95.00th=[12780], 00:23:40.046 | 99.00th=[14746], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:23:40.046 | 99.99th=[17171] 00:23:40.046 bw ( KiB/s): min=24576, max=25408, per=34.65%, avg=24992.00, stdev=588.31, samples=2 00:23:40.046 iops : min= 6144, max= 6352, avg=6248.00, stdev=147.08, samples=2 00:23:40.046 lat (msec) : 4=0.01%, 10=42.58%, 20=57.42% 00:23:40.046 cpu : usr=3.39%, sys=7.67%, ctx=614, majf=0, minf=1 00:23:40.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:40.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.046 issued rwts: total=6144,6375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.046 job2: (groupid=0, jobs=1): err= 0: pid=551870: Mon Dec 9 10:34:40 2024 00:23:40.046 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:23:40.046 slat (nsec): min=1368, max=14638k, avg=134525.11, stdev=938312.43 00:23:40.046 clat (usec): min=5254, max=32431, avg=16083.16, stdev=4992.42 00:23:40.046 lat (usec): min=5263, max=32434, avg=16217.69, stdev=5067.98 00:23:40.046 clat percentiles (usec): 00:23:40.046 | 1.00th=[ 8291], 5.00th=[11076], 10.00th=[11994], 20.00th=[12387], 00:23:40.046 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[16319], 00:23:40.046 | 70.00th=[18744], 80.00th=[20841], 90.00th=[21627], 95.00th=[26608], 00:23:40.047 | 99.00th=[31065], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:23:40.047 | 99.99th=[32375] 00:23:40.047 write: IOPS=3319, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1014msec); 0 zone resets 00:23:40.047 slat (usec): min=2, max=15321, avg=169.06, stdev=916.75 00:23:40.047 clat (usec): min=1527, max=96610, avg=23516.32, stdev=14839.23 00:23:40.047 lat (usec): min=1540, max=96621, avg=23685.38, stdev=14912.14 00:23:40.047 clat percentiles (usec): 
00:23:40.047 | 1.00th=[ 3949], 5.00th=[ 8717], 10.00th=[11731], 20.00th=[13829], 00:23:40.047 | 30.00th=[17433], 40.00th=[20317], 50.00th=[21627], 60.00th=[21890], 00:23:40.047 | 70.00th=[22152], 80.00th=[25560], 90.00th=[40109], 95.00th=[56361], 00:23:40.047 | 99.00th=[90702], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:23:40.047 | 99.99th=[96994] 00:23:40.047 bw ( KiB/s): min=12288, max=13624, per=17.96%, avg=12956.00, stdev=944.69, samples=2 00:23:40.047 iops : min= 3072, max= 3406, avg=3239.00, stdev=236.17, samples=2 00:23:40.047 lat (msec) : 2=0.30%, 4=0.31%, 10=3.77%, 20=49.92%, 50=42.19% 00:23:40.047 lat (msec) : 100=3.51% 00:23:40.047 cpu : usr=2.17%, sys=4.05%, ctx=348, majf=0, minf=2 00:23:40.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:40.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.047 issued rwts: total=3072,3366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.047 job3: (groupid=0, jobs=1): err= 0: pid=551871: Mon Dec 9 10:34:40 2024 00:23:40.047 read: IOPS=5182, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec) 00:23:40.047 slat (nsec): min=1569, max=5794.5k, avg=92921.37, stdev=524076.87 00:23:40.047 clat (usec): min=2252, max=19839, avg=11809.77, stdev=1731.82 00:23:40.047 lat (usec): min=2258, max=19847, avg=11902.69, stdev=1769.43 00:23:40.047 clat percentiles (usec): 00:23:40.047 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[11076], 00:23:40.047 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:23:40.047 | 70.00th=[12256], 80.00th=[13173], 90.00th=[13960], 95.00th=[14746], 00:23:40.047 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17957], 00:23:40.047 | 99.99th=[19792] 00:23:40.047 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 
00:23:40.047 slat (usec): min=2, max=5583, avg=85.51, stdev=491.10 00:23:40.047 clat (usec): min=5498, max=17144, avg=11640.62, stdev=1270.81 00:23:40.047 lat (usec): min=5544, max=17155, avg=11726.13, stdev=1322.17 00:23:40.047 clat percentiles (usec): 00:23:40.047 | 1.00th=[ 7242], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:23:40.047 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:23:40.047 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12911], 95.00th=[13304], 00:23:40.047 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:23:40.047 | 99.99th=[17171] 00:23:40.047 bw ( KiB/s): min=22072, max=22592, per=30.96%, avg=22332.00, stdev=367.70, samples=2 00:23:40.047 iops : min= 5518, max= 5648, avg=5583.00, stdev=91.92, samples=2 00:23:40.047 lat (msec) : 4=0.14%, 10=9.95%, 20=89.91% 00:23:40.047 cpu : usr=4.29%, sys=7.29%, ctx=492, majf=0, minf=2 00:23:40.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:40.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.047 issued rwts: total=5198,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.047 00:23:40.047 Run status group 0 (all jobs): 00:23:40.047 READ: bw=65.4MiB/s (68.6MB/s), 9.86MiB/s-23.9MiB/s (10.3MB/s-25.0MB/s), io=66.3MiB (69.5MB), run=1003-1014msec 00:23:40.047 WRITE: bw=70.4MiB/s (73.9MB/s), 11.2MiB/s-24.8MiB/s (11.8MB/s-26.0MB/s), io=71.4MiB (74.9MB), run=1003-1014msec 00:23:40.047 00:23:40.047 Disk stats (read/write): 00:23:40.047 nvme0n1: ios=2200/2560, merge=0/0, ticks=48641/56617, in_queue=105258, util=89.98% 00:23:40.047 nvme0n2: ios=5154/5543, merge=0/0, ticks=26379/25254, in_queue=51633, util=96.65% 00:23:40.047 nvme0n3: ios=2594/2567, merge=0/0, ticks=41083/64150, in_queue=105233, util=91.06% 00:23:40.047 nvme0n4: ios=4638/4615, 
merge=0/0, ticks=27467/24790, in_queue=52257, util=99.79% 00:23:40.047 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:23:40.047 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=552109 00:23:40.047 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:40.047 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:23:40.047 [global] 00:23:40.047 thread=1 00:23:40.047 invalidate=1 00:23:40.047 rw=read 00:23:40.047 time_based=1 00:23:40.047 runtime=10 00:23:40.047 ioengine=libaio 00:23:40.047 direct=1 00:23:40.047 bs=4096 00:23:40.047 iodepth=1 00:23:40.047 norandommap=1 00:23:40.047 numjobs=1 00:23:40.047 00:23:40.047 [job0] 00:23:40.047 filename=/dev/nvme0n1 00:23:40.047 [job1] 00:23:40.047 filename=/dev/nvme0n2 00:23:40.047 [job2] 00:23:40.047 filename=/dev/nvme0n3 00:23:40.047 [job3] 00:23:40.047 filename=/dev/nvme0n4 00:23:40.047 Could not set queue depth (nvme0n1) 00:23:40.047 Could not set queue depth (nvme0n2) 00:23:40.047 Could not set queue depth (nvme0n3) 00:23:40.047 Could not set queue depth (nvme0n4) 00:23:40.305 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:40.305 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:40.305 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:40.305 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:40.305 fio-3.35 00:23:40.305 Starting 4 threads 00:23:43.584 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:43.584 10:34:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:43.584 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=282624, buflen=4096 00:23:43.584 fio: pid=552248, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:23:43.584 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.584 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:43.584 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:23:43.584 fio: pid=552247, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:23:43.584 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10350592, buflen=4096 00:23:43.584 fio: pid=552245, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:23:43.584 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.584 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:43.842 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=778240, buflen=4096 00:23:43.842 fio: pid=552246, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:23:43.842 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.842 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:23:43.842 00:23:43.842 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=552245: Mon Dec 9 10:34:44 2024 00:23:43.842 read: IOPS=808, BW=3234KiB/s (3311kB/s)(9.87MiB/3126msec) 00:23:43.842 slat (nsec): min=5907, max=65736, avg=7458.10, stdev=3011.89 00:23:43.842 clat (usec): min=196, max=42051, avg=1219.95, stdev=6213.02 00:23:43.842 lat (usec): min=202, max=42073, avg=1227.41, stdev=6215.28 00:23:43.842 clat percentiles (usec): 00:23:43.842 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 237], 00:23:43.842 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:23:43.842 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 293], 00:23:43.842 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:23:43.842 | 99.99th=[42206] 00:23:43.842 bw ( KiB/s): min= 96, max=15256, per=96.05%, avg=3267.33, stdev=5967.10, samples=6 00:23:43.842 iops : min= 24, max= 3814, avg=816.83, stdev=1491.78, samples=6 00:23:43.842 lat (usec) : 250=52.41%, 500=45.02%, 750=0.12% 00:23:43.842 lat (msec) : 2=0.04%, 50=2.37% 00:23:43.842 cpu : usr=0.22%, sys=0.70%, ctx=2529, majf=0, minf=1 00:23:43.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.842 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.842 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.842 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=552246: Mon Dec 9 10:34:44 2024 00:23:43.842 read: IOPS=56, BW=226KiB/s (232kB/s)(760KiB/3361msec) 00:23:43.842 slat (usec): min=6, max=12769, avg=130.60, stdev=1156.03 00:23:43.842 clat (usec): min=241, max=44006, avg=17442.49, stdev=20169.97 00:23:43.842 lat 
(usec): min=249, max=54017, avg=17573.69, stdev=20244.49 00:23:43.842 clat percentiles (usec): 00:23:43.842 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 258], 20.00th=[ 265], 00:23:43.842 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 347], 60.00th=[40633], 00:23:43.842 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:43.842 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:23:43.842 | 99.99th=[43779] 00:23:43.842 bw ( KiB/s): min= 96, max= 681, per=6.47%, avg=220.17, stdev=228.24, samples=6 00:23:43.842 iops : min= 24, max= 170, avg=55.00, stdev=56.96, samples=6 00:23:43.842 lat (usec) : 250=2.62%, 500=54.97% 00:23:43.842 lat (msec) : 50=41.88% 00:23:43.842 cpu : usr=0.12%, sys=0.00%, ctx=194, majf=0, minf=2 00:23:43.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.843 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.843 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.843 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=552247: Mon Dec 9 10:34:44 2024 00:23:43.843 read: IOPS=24, BW=97.4KiB/s (99.8kB/s)(288KiB/2956msec) 00:23:43.843 slat (usec): min=10, max=9790, avg=156.82, stdev=1143.23 00:23:43.843 clat (usec): min=441, max=42211, avg=40525.64, stdev=4801.57 00:23:43.843 lat (usec): min=474, max=52002, avg=40684.31, stdev=4983.19 00:23:43.843 clat percentiles (usec): 00:23:43.843 | 1.00th=[ 441], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:23:43.843 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:43.843 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:23:43.843 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:43.843 | 99.99th=[42206] 
00:23:43.843 bw ( KiB/s): min= 96, max= 104, per=2.85%, avg=97.60, stdev= 3.58, samples=5 00:23:43.843 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:23:43.843 lat (usec) : 500=1.37% 00:23:43.843 lat (msec) : 50=97.26% 00:23:43.843 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:23:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.843 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.843 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.843 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=552248: Mon Dec 9 10:34:44 2024 00:23:43.843 read: IOPS=25, BW=101KiB/s (103kB/s)(276KiB/2731msec) 00:23:43.843 slat (nsec): min=10986, max=37074, avg=22539.94, stdev=4474.35 00:23:43.843 clat (usec): min=366, max=41962, avg=39238.58, stdev=8338.13 00:23:43.843 lat (usec): min=387, max=41981, avg=39261.05, stdev=8336.80 00:23:43.843 clat percentiles (usec): 00:23:43.843 | 1.00th=[ 367], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:23:43.843 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:43.843 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:43.843 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:43.843 | 99.99th=[42206] 00:23:43.843 bw ( KiB/s): min= 96, max= 112, per=2.94%, avg=100.80, stdev= 7.16, samples=5 00:23:43.843 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:23:43.843 lat (usec) : 500=4.29% 00:23:43.843 lat (msec) : 50=94.29% 00:23:43.843 cpu : usr=0.11%, sys=0.00%, ctx=71, majf=0, minf=2 00:23:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:43.843 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.843 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.843 00:23:43.843 Run status group 0 (all jobs): 00:23:43.843 READ: bw=3401KiB/s (3483kB/s), 97.4KiB/s-3234KiB/s (99.8kB/s-3311kB/s), io=11.2MiB (11.7MB), run=2731-3361msec 00:23:43.843 00:23:43.843 Disk stats (read/write): 00:23:43.843 nvme0n1: ios=2527/0, merge=0/0, ticks=3065/0, in_queue=3065, util=95.76% 00:23:43.843 nvme0n2: ios=184/0, merge=0/0, ticks=3112/0, in_queue=3112, util=96.14% 00:23:43.843 nvme0n3: ios=70/0, merge=0/0, ticks=2837/0, in_queue=2837, util=96.22% 00:23:43.843 nvme0n4: ios=109/0, merge=0/0, ticks=2898/0, in_queue=2898, util=99.22% 00:23:44.106 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:44.106 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:44.363 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:44.363 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:44.363 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:44.364 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:44.620 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:23:44.620 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:44.878 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:23:44.878 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 552109 00:23:44.878 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:23:44.878 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:44.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:44.878 nvmf hotplug test: fio failed as expected 00:23:44.878 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.136 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.136 rmmod nvme_tcp 00:23:45.136 rmmod nvme_fabrics 00:23:45.136 rmmod nvme_keyring 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 549350 ']' 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 549350 00:23:45.394 10:34:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 549350 ']' 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 549350 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 549350 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 549350' 00:23:45.394 killing process with pid 549350 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 549350 00:23:45.394 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 549350 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.652 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.550 00:23:47.550 real 0m26.162s 00:23:47.550 user 1m46.170s 00:23:47.550 sys 0m7.598s 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.550 ************************************ 00:23:47.550 END TEST nvmf_fio_target 00:23:47.550 ************************************ 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.550 10:34:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:47.809 ************************************ 00:23:47.809 START TEST nvmf_bdevio 00:23:47.809 ************************************ 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:47.809 * Looking for test storage... 00:23:47.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:23:47.809 10:34:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:47.809 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:47.809 --rc genhtml_branch_coverage=1 00:23:47.809 --rc genhtml_function_coverage=1 00:23:47.809 --rc genhtml_legend=1 00:23:47.809 --rc geninfo_all_blocks=1 00:23:47.809 --rc geninfo_unexecuted_blocks=1 00:23:47.809 00:23:47.809 ' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.809 --rc genhtml_branch_coverage=1 00:23:47.809 --rc genhtml_function_coverage=1 00:23:47.809 --rc genhtml_legend=1 00:23:47.809 --rc geninfo_all_blocks=1 00:23:47.809 --rc geninfo_unexecuted_blocks=1 00:23:47.809 00:23:47.809 ' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.809 --rc genhtml_branch_coverage=1 00:23:47.809 --rc genhtml_function_coverage=1 00:23:47.809 --rc genhtml_legend=1 00:23:47.809 --rc geninfo_all_blocks=1 00:23:47.809 --rc geninfo_unexecuted_blocks=1 00:23:47.809 00:23:47.809 ' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.809 --rc genhtml_branch_coverage=1 00:23:47.809 --rc genhtml_function_coverage=1 00:23:47.809 --rc genhtml_legend=1 00:23:47.809 --rc geninfo_all_blocks=1 00:23:47.809 --rc geninfo_unexecuted_blocks=1 00:23:47.809 00:23:47.809 ' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.809 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.810 10:34:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.071 10:34:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.071 10:34:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.071 
10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.071 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.071 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.072 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.072 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:23:53.072 00:23:53.072 --- 10.0.0.2 ping statistics --- 00:23:53.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.072 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:53.072 00:23:53.072 --- 10.0.0.1 ping statistics --- 00:23:53.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.072 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.072 10:34:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=556547 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 556547 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 556547 ']' 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.072 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:53.331 [2024-12-09 10:34:54.273293] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:23:53.331 [2024-12-09 10:34:54.273339] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.331 [2024-12-09 10:34:54.343086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.331 [2024-12-09 10:34:54.385554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.331 [2024-12-09 10:34:54.385591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.331 [2024-12-09 10:34:54.385599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.331 [2024-12-09 10:34:54.385605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.331 [2024-12-09 10:34:54.385610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.331 [2024-12-09 10:34:54.387272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:53.331 [2024-12-09 10:34:54.387383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:53.331 [2024-12-09 10:34:54.387490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.331 [2024-12-09 10:34:54.387491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:53.331 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.331 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:23:53.331 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.331 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.331 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.589 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.589 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 [2024-12-09 10:34:54.525151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.590 10:34:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.590 Malloc0 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:53.590 [2024-12-09 10:34:54.593232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.590 { 00:23:53.590 "params": { 00:23:53.590 "name": "Nvme$subsystem", 00:23:53.590 "trtype": "$TEST_TRANSPORT", 00:23:53.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.590 "adrfam": "ipv4", 00:23:53.590 "trsvcid": "$NVMF_PORT", 00:23:53.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.590 "hdgst": ${hdgst:-false}, 00:23:53.590 "ddgst": ${ddgst:-false} 00:23:53.590 }, 00:23:53.590 "method": "bdev_nvme_attach_controller" 00:23:53.590 } 00:23:53.590 EOF 00:23:53.590 )") 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:23:53.590 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:53.590 "params": { 00:23:53.590 "name": "Nvme1", 00:23:53.590 "trtype": "tcp", 00:23:53.590 "traddr": "10.0.0.2", 00:23:53.590 "adrfam": "ipv4", 00:23:53.590 "trsvcid": "4420", 00:23:53.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.590 "hdgst": false, 00:23:53.590 "ddgst": false 00:23:53.590 }, 00:23:53.590 "method": "bdev_nvme_attach_controller" 00:23:53.590 }' 00:23:53.590 [2024-12-09 10:34:54.640959] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:23:53.590 [2024-12-09 10:34:54.641017] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556734 ] 00:23:53.590 [2024-12-09 10:34:54.708036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:53.590 [2024-12-09 10:34:54.752447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.590 [2024-12-09 10:34:54.752541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.590 [2024-12-09 10:34:54.752544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.848 I/O targets: 00:23:53.848 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:53.848 00:23:53.848 00:23:53.848 CUnit - A unit testing framework for C - Version 2.1-3 00:23:53.848 http://cunit.sourceforge.net/ 00:23:53.848 00:23:53.848 00:23:53.848 Suite: bdevio tests on: Nvme1n1 00:23:53.848 Test: blockdev write read block ...passed 00:23:53.848 Test: blockdev write zeroes read block ...passed 00:23:53.848 Test: blockdev write zeroes read no split ...passed 00:23:54.112 Test: blockdev write zeroes read split 
...passed 00:23:54.112 Test: blockdev write zeroes read split partial ...passed 00:23:54.112 Test: blockdev reset ...[2024-12-09 10:34:55.112797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:54.112 [2024-12-09 10:34:55.112857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5ff30 (9): Bad file descriptor 00:23:54.112 [2024-12-09 10:34:55.124995] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:54.112 passed 00:23:54.112 Test: blockdev write read 8 blocks ...passed 00:23:54.112 Test: blockdev write read size > 128k ...passed 00:23:54.112 Test: blockdev write read invalid size ...passed 00:23:54.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:54.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:54.112 Test: blockdev write read max offset ...passed 00:23:54.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:54.372 Test: blockdev writev readv 8 blocks ...passed 00:23:54.372 Test: blockdev writev readv 30 x 1block ...passed 00:23:54.372 Test: blockdev writev readv block ...passed 00:23:54.372 Test: blockdev writev readv size > 128k ...passed 00:23:54.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:54.372 Test: blockdev comparev and writev ...[2024-12-09 10:34:55.337938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.372 [2024-12-09 10:34:55.337967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.372 [2024-12-09 10:34:55.337981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 
10:34:55.337989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.338859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:54.373 [2024-12-09 10:34:55.338867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:54.373 passed 00:23:54.373 Test: blockdev nvme passthru rw ...passed 00:23:54.373 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:34:55.422406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.373 [2024-12-09 10:34:55.422421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.422544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.373 [2024-12-09 10:34:55.422555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.422680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.373 [2024-12-09 10:34:55.422694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:54.373 [2024-12-09 10:34:55.422819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.373 [2024-12-09 10:34:55.422829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:54.373 passed 00:23:54.373 Test: blockdev nvme admin passthru ...passed 00:23:54.373 Test: blockdev copy ...passed 00:23:54.373 00:23:54.373 Run Summary: Type Total Ran Passed Failed Inactive 00:23:54.373 suites 1 1 n/a 0 0 00:23:54.373 tests 23 23 23 0 0 00:23:54.373 asserts 152 152 152 0 n/a 00:23:54.373 00:23:54.373 Elapsed time = 1.121 seconds 
00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.631 rmmod nvme_tcp 00:23:54.631 rmmod nvme_fabrics 00:23:54.631 rmmod nvme_keyring 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 556547 ']' 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 556547 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 556547 ']' 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 556547 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556547 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556547' 00:23:54.631 killing process with pid 556547 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 556547 00:23:54.631 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 556547 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.889 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.420 00:23:57.420 real 0m9.339s 00:23:57.420 user 0m9.691s 00:23:57.420 sys 0m4.473s 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.420 ************************************ 00:23:57.420 END TEST nvmf_bdevio 00:23:57.420 ************************************ 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:57.420 00:23:57.420 real 4m30.325s 00:23:57.420 user 10m18.314s 00:23:57.420 sys 1m32.958s 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:57.420 ************************************ 00:23:57.420 END TEST nvmf_target_core 00:23:57.420 ************************************ 00:23:57.420 10:34:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:23:57.420 10:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.420 10:34:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.420 10:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:23:57.420 ************************************ 00:23:57.420 START TEST nvmf_target_extra 00:23:57.420 ************************************ 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:23:57.420 * Looking for test storage... 00:23:57.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:57.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.420 --rc genhtml_branch_coverage=1 00:23:57.420 --rc genhtml_function_coverage=1 00:23:57.420 --rc genhtml_legend=1 00:23:57.420 --rc geninfo_all_blocks=1 
00:23:57.420 --rc geninfo_unexecuted_blocks=1 00:23:57.420 00:23:57.420 ' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:57.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.420 --rc genhtml_branch_coverage=1 00:23:57.420 --rc genhtml_function_coverage=1 00:23:57.420 --rc genhtml_legend=1 00:23:57.420 --rc geninfo_all_blocks=1 00:23:57.420 --rc geninfo_unexecuted_blocks=1 00:23:57.420 00:23:57.420 ' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:57.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.420 --rc genhtml_branch_coverage=1 00:23:57.420 --rc genhtml_function_coverage=1 00:23:57.420 --rc genhtml_legend=1 00:23:57.420 --rc geninfo_all_blocks=1 00:23:57.420 --rc geninfo_unexecuted_blocks=1 00:23:57.420 00:23:57.420 ' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:57.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.420 --rc genhtml_branch_coverage=1 00:23:57.420 --rc genhtml_function_coverage=1 00:23:57.420 --rc genhtml_legend=1 00:23:57.420 --rc geninfo_all_blocks=1 00:23:57.420 --rc geninfo_unexecuted_blocks=1 00:23:57.420 00:23:57.420 ' 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.420 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.421 ************************************ 00:23:57.421 START TEST nvmf_example 00:23:57.421 ************************************ 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:23:57.421 * Looking for test storage... 00:23:57.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.421 
10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:57.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.421 --rc genhtml_branch_coverage=1 00:23:57.421 --rc genhtml_function_coverage=1 00:23:57.421 --rc genhtml_legend=1 00:23:57.421 --rc geninfo_all_blocks=1 00:23:57.421 --rc geninfo_unexecuted_blocks=1 00:23:57.421 00:23:57.421 ' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:57.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.421 --rc genhtml_branch_coverage=1 00:23:57.421 --rc genhtml_function_coverage=1 00:23:57.421 --rc genhtml_legend=1 00:23:57.421 --rc geninfo_all_blocks=1 00:23:57.421 --rc geninfo_unexecuted_blocks=1 00:23:57.421 00:23:57.421 ' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:57.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.421 --rc genhtml_branch_coverage=1 00:23:57.421 --rc genhtml_function_coverage=1 00:23:57.421 --rc genhtml_legend=1 00:23:57.421 --rc geninfo_all_blocks=1 00:23:57.421 --rc geninfo_unexecuted_blocks=1 00:23:57.421 00:23:57.421 ' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:57.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.421 --rc 
genhtml_branch_coverage=1 00:23:57.421 --rc genhtml_function_coverage=1 00:23:57.421 --rc genhtml_legend=1 00:23:57.421 --rc geninfo_all_blocks=1 00:23:57.421 --rc geninfo_unexecuted_blocks=1 00:23:57.421 00:23:57.421 ' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.421 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:23:57.679 10:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.679 
10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.679 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.948 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:02.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:02.948 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.948 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:02.949 Found net devices under 0000:86:00.0: cvl_0_0 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.949 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:02.949 Found net devices under 0000:86:00.1: cvl_0_1 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.949 
10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.949 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.949 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.949 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.949 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.949 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:24:03.208 00:24:03.208 --- 10.0.0.2 ping statistics --- 00:24:03.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.208 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:24:03.208 00:24:03.208 --- 10.0.0.1 ping statistics --- 00:24:03.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.208 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.208 10:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=560536 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 560536 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 560536 ']' 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:03.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.208 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:24:04.142 10:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:04.142 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:16.332 Initializing NVMe Controllers 00:24:16.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.332 Initialization complete. Launching workers. 00:24:16.332 ======================================================== 00:24:16.332 Latency(us) 00:24:16.332 Device Information : IOPS MiB/s Average min max 00:24:16.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17989.23 70.27 3557.03 618.21 15599.69 00:24:16.332 ======================================================== 00:24:16.332 Total : 17989.23 70.27 3557.03 618.21 15599.69 00:24:16.332 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.332 rmmod nvme_tcp 00:24:16.332 rmmod nvme_fabrics 00:24:16.332 rmmod nvme_keyring 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 560536 ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 560536 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 560536 ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 560536 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 560536 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 560536' 00:24:16.332 killing process with pid 560536 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 560536 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 560536 00:24:16.332 nvmf threads initialize successfully 00:24:16.332 bdev subsystem init successfully 00:24:16.332 created a nvmf target service 00:24:16.332 create targets's poll groups done 00:24:16.332 all subsystems of target started 00:24:16.332 nvmf target is running 00:24:16.332 all subsystems of target stopped 00:24:16.332 destroy targets's poll groups done 00:24:16.332 destroyed the nvmf target service 00:24:16.332 bdev subsystem finish 
successfully 00:24:16.332 nvmf threads destroy successfully 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.332 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.333 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:16.897 00:24:16.897 real 0m19.491s 00:24:16.897 user 0m46.053s 00:24:16.897 sys 0m5.755s 00:24:16.897 10:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:24:16.897 ************************************ 00:24:16.897 END TEST nvmf_example 00:24:16.897 ************************************ 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.897 ************************************ 00:24:16.897 START TEST nvmf_filesystem 00:24:16.897 ************************************ 00:24:16.897 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:24:16.897 * Looking for test storage... 
00:24:16.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.897 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.897 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.897 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:24:17.158 
10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.158 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:17.158 --rc genhtml_branch_coverage=1 00:24:17.158 --rc genhtml_function_coverage=1 00:24:17.158 --rc genhtml_legend=1 00:24:17.158 --rc geninfo_all_blocks=1 00:24:17.158 --rc geninfo_unexecuted_blocks=1 00:24:17.158 00:24:17.158 ' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.158 --rc genhtml_branch_coverage=1 00:24:17.158 --rc genhtml_function_coverage=1 00:24:17.158 --rc genhtml_legend=1 00:24:17.158 --rc geninfo_all_blocks=1 00:24:17.158 --rc geninfo_unexecuted_blocks=1 00:24:17.158 00:24:17.158 ' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.158 --rc genhtml_branch_coverage=1 00:24:17.158 --rc genhtml_function_coverage=1 00:24:17.158 --rc genhtml_legend=1 00:24:17.158 --rc geninfo_all_blocks=1 00:24:17.158 --rc geninfo_unexecuted_blocks=1 00:24:17.158 00:24:17.158 ' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.158 --rc genhtml_branch_coverage=1 00:24:17.158 --rc genhtml_function_coverage=1 00:24:17.158 --rc genhtml_legend=1 00:24:17.158 --rc geninfo_all_blocks=1 00:24:17.158 --rc geninfo_unexecuted_blocks=1 00:24:17.158 00:24:17.158 ' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:24:17.158 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:17.158 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:24:17.158 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:24:17.158 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:24:17.159 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:24:17.159 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:17.159 
10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:24:17.159 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:17.159 #define SPDK_CONFIG_H 00:24:17.159 #define SPDK_CONFIG_AIO_FSDEV 1 00:24:17.159 #define SPDK_CONFIG_APPS 1 00:24:17.159 #define SPDK_CONFIG_ARCH native 00:24:17.159 #undef SPDK_CONFIG_ASAN 00:24:17.159 #undef SPDK_CONFIG_AVAHI 00:24:17.159 #undef SPDK_CONFIG_CET 00:24:17.159 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:24:17.159 #define SPDK_CONFIG_COVERAGE 1 00:24:17.159 #define SPDK_CONFIG_CROSS_PREFIX 00:24:17.159 #undef SPDK_CONFIG_CRYPTO 00:24:17.159 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:17.159 #undef SPDK_CONFIG_CUSTOMOCF 00:24:17.159 #undef SPDK_CONFIG_DAOS 00:24:17.159 #define SPDK_CONFIG_DAOS_DIR 00:24:17.159 #define SPDK_CONFIG_DEBUG 1 00:24:17.159 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:17.159 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:24:17.159 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:17.159 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:17.159 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:17.159 #undef SPDK_CONFIG_DPDK_UADK 00:24:17.159 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:24:17.159 #define SPDK_CONFIG_EXAMPLES 1 00:24:17.159 #undef SPDK_CONFIG_FC 00:24:17.159 #define SPDK_CONFIG_FC_PATH 00:24:17.159 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:17.159 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:17.159 #define SPDK_CONFIG_FSDEV 1 00:24:17.159 #undef SPDK_CONFIG_FUSE 00:24:17.159 #undef SPDK_CONFIG_FUZZER 00:24:17.159 #define SPDK_CONFIG_FUZZER_LIB 00:24:17.159 #undef SPDK_CONFIG_GOLANG 00:24:17.159 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:17.159 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:17.159 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:17.159 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:17.159 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:17.159 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:17.159 #undef SPDK_CONFIG_HAVE_LZ4 00:24:17.159 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:24:17.159 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:24:17.159 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:17.159 #define SPDK_CONFIG_IDXD 1 00:24:17.159 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:17.159 #undef SPDK_CONFIG_IPSEC_MB 00:24:17.159 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:17.159 #define SPDK_CONFIG_ISAL 1 00:24:17.159 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:17.159 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:17.159 #define SPDK_CONFIG_LIBDIR 00:24:17.159 #undef SPDK_CONFIG_LTO 00:24:17.159 #define SPDK_CONFIG_MAX_LCORES 128 00:24:17.159 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:24:17.159 #define SPDK_CONFIG_NVME_CUSE 1 00:24:17.159 #undef SPDK_CONFIG_OCF 00:24:17.159 #define SPDK_CONFIG_OCF_PATH 00:24:17.159 #define SPDK_CONFIG_OPENSSL_PATH 00:24:17.159 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:17.159 #define SPDK_CONFIG_PGO_DIR 00:24:17.159 #undef SPDK_CONFIG_PGO_USE 00:24:17.159 #define SPDK_CONFIG_PREFIX /usr/local 00:24:17.159 #undef SPDK_CONFIG_RAID5F 00:24:17.159 #undef SPDK_CONFIG_RBD 00:24:17.159 #define SPDK_CONFIG_RDMA 1 00:24:17.159 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:17.159 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:17.159 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:17.159 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:17.159 #define SPDK_CONFIG_SHARED 1 00:24:17.159 #undef SPDK_CONFIG_SMA 00:24:17.159 #define SPDK_CONFIG_TESTS 1 00:24:17.159 #undef SPDK_CONFIG_TSAN 00:24:17.160 #define SPDK_CONFIG_UBLK 1 00:24:17.160 #define SPDK_CONFIG_UBSAN 1 00:24:17.160 #undef SPDK_CONFIG_UNIT_TESTS 00:24:17.160 #undef SPDK_CONFIG_URING 00:24:17.160 #define SPDK_CONFIG_URING_PATH 00:24:17.160 #undef SPDK_CONFIG_URING_ZNS 00:24:17.160 #undef SPDK_CONFIG_USDT 00:24:17.160 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:17.160 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:17.160 #define SPDK_CONFIG_VFIO_USER 1 00:24:17.160 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:17.160 #define SPDK_CONFIG_VHOST 1 00:24:17.160 #define SPDK_CONFIG_VIRTIO 1 00:24:17.160 #undef SPDK_CONFIG_VTUNE 00:24:17.160 #define SPDK_CONFIG_VTUNE_DIR 00:24:17.160 #define SPDK_CONFIG_WERROR 1 00:24:17.160 #define SPDK_CONFIG_WPDK_DIR 00:24:17.160 #undef SPDK_CONFIG_XNVME 00:24:17.160 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
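The trace above ends with applications.sh slurping `include/spdk/config.h` and glob-matching the whole contents against `#define SPDK_CONFIG_DEBUG` to decide whether debug-app handling applies. A minimal sketch of that technique follows; `config_has_debug` is a hypothetical helper name and the temp file stands in for the real header (the actual check lives inline in SPDK's `test/common/applications.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the check traced above: read a config header with $(<file)
# and glob-match the contents against a #define token, no subprocess.
# config_has_debug is an illustrative name, not SPDK's API.
config_has_debug() {
    local config_file=$1
    # $(<file) slurps the file in-shell; the unquoted * globs on the
    # right of == turn [[ ]] into a substring test over the whole file.
    [[ $(<"$config_file") == *"#define SPDK_CONFIG_DEBUG"* ]]
}

cfg=$(mktemp)
printf '#define SPDK_CONFIG_H\n#define SPDK_CONFIG_DEBUG 1\n' > "$cfg"
if config_has_debug "$cfg"; then
    echo "debug build detected"
fi
rm -f "$cfg"
```

Matching the slurped file with `[[ … == *pattern* ]]` rather than `grep` avoids forking a process for every flag probed, which matters in trace-heavy test scripts like this one.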
00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:24:17.160 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:17.160 
10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:24:17.160 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:24:17.161 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:24:17.161 
10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:24:17.161 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:17.161 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 562882 ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 562882 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZF3EkV 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZF3EkV/tests/target /tmp/spdk.ZF3EkV 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:24:17.162 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189261697024 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963953152 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6702256128 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971945472 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981976576 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981235200 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981976576 00:24:17.163 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=741376 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:24:17.163 * Looking for test storage... 
00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189261697024 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8916848640 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.163 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:17.163 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:24:17.163 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.422 --rc genhtml_branch_coverage=1 00:24:17.422 --rc genhtml_function_coverage=1 00:24:17.422 --rc genhtml_legend=1 00:24:17.422 --rc geninfo_all_blocks=1 00:24:17.422 --rc geninfo_unexecuted_blocks=1 00:24:17.422 00:24:17.422 ' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.422 --rc genhtml_branch_coverage=1 00:24:17.422 --rc genhtml_function_coverage=1 00:24:17.422 --rc genhtml_legend=1 00:24:17.422 --rc geninfo_all_blocks=1 00:24:17.422 --rc geninfo_unexecuted_blocks=1 00:24:17.422 00:24:17.422 ' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.422 --rc genhtml_branch_coverage=1 00:24:17.422 --rc genhtml_function_coverage=1 00:24:17.422 --rc genhtml_legend=1 00:24:17.422 --rc geninfo_all_blocks=1 00:24:17.422 --rc geninfo_unexecuted_blocks=1 00:24:17.422 00:24:17.422 ' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.422 --rc genhtml_branch_coverage=1 00:24:17.422 --rc genhtml_function_coverage=1 00:24:17.422 --rc genhtml_legend=1 00:24:17.422 --rc geninfo_all_blocks=1 00:24:17.422 --rc geninfo_unexecuted_blocks=1 00:24:17.422 00:24:17.422 ' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.422 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.422 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.868 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:22.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.868 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:22.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.869 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:22.869 Found net devices under 0000:86:00.0: cvl_0_0 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:22.869 Found net devices under 0000:86:00.1: cvl_0_1 00:24:22.869 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:22.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:24:22.869 00:24:22.869 --- 10.0.0.2 ping statistics --- 00:24:22.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.869 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:24:22.869 00:24:22.869 --- 10.0.0.1 ping statistics --- 00:24:22.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.869 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.869 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.869 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:24:22.869 10:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:22.869 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.869 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:24:23.128 ************************************ 00:24:23.128 START TEST nvmf_filesystem_no_in_capsule 00:24:23.128 ************************************ 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=566001 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 566001 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 566001 ']' 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.128 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.128 [2024-12-09 10:35:24.109024] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:24:23.128 [2024-12-09 10:35:24.109070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.128 [2024-12-09 10:35:24.179293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.128 [2024-12-09 10:35:24.223272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.128 [2024-12-09 10:35:24.223309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:23.128 [2024-12-09 10:35:24.223320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.128 [2024-12-09 10:35:24.223326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.128 [2024-12-09 10:35:24.223331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.128 [2024-12-09 10:35:24.224767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.128 [2024-12-09 10:35:24.224862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.128 [2024-12-09 10:35:24.224927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.128 [2024-12-09 10:35:24.224929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 [2024-12-09 10:35:24.363645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 [2024-12-09 10:35:24.530743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:23.387 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:24:23.387 10:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:24:23.388 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:24:23.388 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.388 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:23.388 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.388 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:23.388 { 00:24:23.388 "name": "Malloc1", 00:24:23.388 "aliases": [ 00:24:23.388 "87151e22-f1df-47bb-b83f-c2b4ed493919" 00:24:23.388 ], 00:24:23.388 "product_name": "Malloc disk", 00:24:23.388 "block_size": 512, 00:24:23.388 "num_blocks": 1048576, 00:24:23.388 "uuid": "87151e22-f1df-47bb-b83f-c2b4ed493919", 00:24:23.388 "assigned_rate_limits": { 00:24:23.388 "rw_ios_per_sec": 0, 00:24:23.388 "rw_mbytes_per_sec": 0, 00:24:23.388 "r_mbytes_per_sec": 0, 00:24:23.388 "w_mbytes_per_sec": 0 00:24:23.388 }, 00:24:23.388 "claimed": true, 00:24:23.388 "claim_type": "exclusive_write", 00:24:23.388 "zoned": false, 00:24:23.388 "supported_io_types": { 00:24:23.388 "read": true, 00:24:23.388 "write": true, 00:24:23.388 "unmap": true, 00:24:23.388 "flush": true, 00:24:23.388 "reset": true, 00:24:23.388 "nvme_admin": false, 00:24:23.388 "nvme_io": false, 00:24:23.388 "nvme_io_md": false, 00:24:23.388 "write_zeroes": true, 00:24:23.388 "zcopy": true, 00:24:23.388 "get_zone_info": false, 00:24:23.388 "zone_management": false, 00:24:23.388 "zone_append": false, 00:24:23.388 "compare": false, 00:24:23.388 "compare_and_write": 
false, 00:24:23.388 "abort": true, 00:24:23.388 "seek_hole": false, 00:24:23.388 "seek_data": false, 00:24:23.388 "copy": true, 00:24:23.388 "nvme_iov_md": false 00:24:23.388 }, 00:24:23.388 "memory_domains": [ 00:24:23.388 { 00:24:23.388 "dma_device_id": "system", 00:24:23.388 "dma_device_type": 1 00:24:23.388 }, 00:24:23.388 { 00:24:23.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.388 "dma_device_type": 2 00:24:23.388 } 00:24:23.388 ], 00:24:23.388 "driver_specific": {} 00:24:23.388 } 00:24:23.388 ]' 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:24:23.646 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:24.578 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:24:24.578 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:24:24.578 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.578 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:24.578 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:24:27.103 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:24:27.104 10:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:24:27.104 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:24:27.668 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:24:28.601 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:24:28.601 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:24:28.601 10:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:28.602 ************************************ 00:24:28.602 START TEST filesystem_ext4 00:24:28.602 ************************************ 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:24:28.602 10:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:24:28.602 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:24:28.602 mke2fs 1.47.0 (5-Feb-2023) 00:24:28.861 Discarding device blocks: 0/522240 done 00:24:28.861 Creating filesystem with 522240 1k blocks and 130560 inodes 00:24:28.861 Filesystem UUID: 720139a9-33ea-4be9-bc88-d0dc1dc80944 00:24:28.861 Superblock backups stored on blocks: 00:24:28.861 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:24:28.861 00:24:28.861 Allocating group tables: 0/64 done 00:24:28.861 Writing inode tables: 0/64 done 00:24:32.140 Creating journal (8192 blocks): done 00:24:33.897 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:24:33.897 00:24:33.897 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:24:33.897 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:24:40.450 10:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 566001 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:24:40.450 00:24:40.450 real 0m11.189s 00:24:40.450 user 0m0.032s 00:24:40.450 sys 0m0.072s 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:24:40.450 ************************************ 00:24:40.450 END TEST filesystem_ext4 00:24:40.450 ************************************ 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:24:40.450 
10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:40.450 ************************************ 00:24:40.450 START TEST filesystem_btrfs 00:24:40.450 ************************************ 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:24:40.450 10:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:24:40.450 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:24:40.450 btrfs-progs v6.8.1 00:24:40.450 See https://btrfs.readthedocs.io for more information. 00:24:40.450 00:24:40.450 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:24:40.450 NOTE: several default settings have changed in version 5.15, please make sure 00:24:40.450 this does not affect your deployments: 00:24:40.450 - DUP for metadata (-m dup) 00:24:40.450 - enabled no-holes (-O no-holes) 00:24:40.450 - enabled free-space-tree (-R free-space-tree) 00:24:40.450 00:24:40.450 Label: (null) 00:24:40.450 UUID: 4b29286b-6ff1-460c-b90e-29ccce12dde3 00:24:40.450 Node size: 16384 00:24:40.450 Sector size: 4096 (CPU page size: 4096) 00:24:40.450 Filesystem size: 510.00MiB 00:24:40.450 Block group profiles: 00:24:40.451 Data: single 8.00MiB 00:24:40.451 Metadata: DUP 32.00MiB 00:24:40.451 System: DUP 8.00MiB 00:24:40.451 SSD detected: yes 00:24:40.451 Zoned device: no 00:24:40.451 Features: extref, skinny-metadata, no-holes, free-space-tree 00:24:40.451 Checksum: crc32c 00:24:40.451 Number of devices: 1 00:24:40.451 Devices: 00:24:40.451 ID SIZE PATH 00:24:40.451 1 510.00MiB /dev/nvme0n1p1 00:24:40.451 00:24:40.451 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:24:40.451 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:24:41.016 10:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 566001 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:24:41.016 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:24:41.017 00:24:41.017 real 0m1.116s 00:24:41.017 user 0m0.030s 00:24:41.017 sys 0m0.108s 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.017 
10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:24:41.017 ************************************ 00:24:41.017 END TEST filesystem_btrfs 00:24:41.017 ************************************ 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:41.017 ************************************ 00:24:41.017 START TEST filesystem_xfs 00:24:41.017 ************************************ 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:24:41.017 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:24:41.275 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:24:41.275 = sectsz=512 attr=2, projid32bit=1 00:24:41.275 = crc=1 finobt=1, sparse=1, rmapbt=0 00:24:41.275 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:24:41.275 data = bsize=4096 blocks=130560, imaxpct=25 00:24:41.275 = sunit=0 swidth=0 blks 00:24:41.275 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:24:41.275 log =internal log bsize=4096 blocks=16384, version=2 00:24:41.275 = sectsz=512 sunit=0 blks, lazy-count=1 00:24:41.275 realtime =none extsz=4096 blocks=0, rtextents=0 00:24:41.840 Discarding blocks...Done. 
00:24:41.840 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:24:41.840 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 566001 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:24:43.740 10:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:24:43.740 00:24:43.740 real 0m2.616s 00:24:43.740 user 0m0.025s 00:24:43.740 sys 0m0.073s 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:24:43.740 ************************************ 00:24:43.740 END TEST filesystem_xfs 00:24:43.740 ************************************ 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:43.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:43.740 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 566001 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 566001 ']' 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 566001 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 566001 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 566001' 00:24:43.999 killing process with pid 566001 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 566001 00:24:43.999 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 566001 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:24:44.258 00:24:44.258 real 0m21.285s 00:24:44.258 user 1m23.804s 00:24:44.258 sys 0m1.463s 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.258 ************************************ 00:24:44.258 END TEST nvmf_filesystem_no_in_capsule 00:24:44.258 ************************************ 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.258 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:24:44.258 ************************************ 00:24:44.258 START TEST nvmf_filesystem_in_capsule 00:24:44.258 ************************************ 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=569685 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 569685 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 569685 ']' 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.258 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.258 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.516 [2024-12-09 10:35:45.475690] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:24:44.516 [2024-12-09 10:35:45.475739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.516 [2024-12-09 10:35:45.549065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.516 [2024-12-09 10:35:45.589830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.516 [2024-12-09 10:35:45.589867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.516 [2024-12-09 10:35:45.589876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.516 [2024-12-09 10:35:45.589882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.516 [2024-12-09 10:35:45.589887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.516 [2024-12-09 10:35:45.591384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.516 [2024-12-09 10:35:45.591500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.516 [2024-12-09 10:35:45.591585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.516 [2024-12-09 10:35:45.591586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 [2024-12-09 10:35:45.734104] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 [2024-12-09 10:35:45.903413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.774 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:44.774 { 00:24:44.774 "name": "Malloc1", 00:24:44.774 "aliases": [ 00:24:44.774 "a809b77c-eb77-43aa-a0ea-128293d0048b" 00:24:44.774 ], 00:24:44.774 "product_name": "Malloc disk", 00:24:44.774 "block_size": 512, 00:24:44.774 "num_blocks": 1048576, 00:24:44.774 "uuid": "a809b77c-eb77-43aa-a0ea-128293d0048b", 00:24:44.774 "assigned_rate_limits": { 00:24:44.774 "rw_ios_per_sec": 0, 00:24:44.774 "rw_mbytes_per_sec": 0, 00:24:44.774 "r_mbytes_per_sec": 0, 00:24:44.774 "w_mbytes_per_sec": 0 00:24:44.774 }, 00:24:44.774 "claimed": true, 00:24:44.774 "claim_type": "exclusive_write", 00:24:44.774 "zoned": false, 00:24:44.774 "supported_io_types": { 00:24:44.774 "read": true, 00:24:44.774 "write": true, 00:24:44.774 "unmap": true, 00:24:44.774 "flush": true, 00:24:44.774 "reset": true, 00:24:44.774 "nvme_admin": false, 00:24:44.774 "nvme_io": false, 00:24:44.774 "nvme_io_md": false, 00:24:44.774 "write_zeroes": true, 00:24:44.774 "zcopy": true, 00:24:44.774 "get_zone_info": false, 00:24:44.774 "zone_management": false, 00:24:44.774 "zone_append": false, 00:24:44.774 "compare": false, 00:24:44.774 "compare_and_write": false, 00:24:44.774 "abort": true, 00:24:44.774 "seek_hole": false, 00:24:44.774 "seek_data": false, 00:24:44.774 "copy": true, 00:24:44.774 "nvme_iov_md": false 00:24:44.774 }, 00:24:44.774 "memory_domains": [ 00:24:44.774 { 00:24:44.774 "dma_device_id": "system", 00:24:44.774 "dma_device_type": 1 00:24:44.774 }, 00:24:44.774 { 00:24:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.774 "dma_device_type": 2 00:24:44.774 } 00:24:44.774 ], 00:24:44.774 
"driver_specific": {} 00:24:44.774 } 00:24:44.774 ]' 00:24:44.774 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:45.031 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:24:45.031 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:45.031 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:24:45.031 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:24:45.031 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:24:45.031 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:24:45.031 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:45.961 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:24:45.961 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:24:45.961 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.961 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:24:45.961 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:48.484 10:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:24:48.484 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:24:49.051 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:49.994 ************************************ 00:24:49.994 START TEST filesystem_in_capsule_ext4 00:24:49.994 ************************************ 00:24:49.994 10:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:24:49.994 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:24:49.994 mke2fs 1.47.0 (5-Feb-2023) 00:24:49.994 Discarding device blocks: 
0/522240 done 00:24:49.994 Creating filesystem with 522240 1k blocks and 130560 inodes 00:24:49.994 Filesystem UUID: d02f8fe1-73e2-4379-8e1b-36334a026132 00:24:49.994 Superblock backups stored on blocks: 00:24:49.994 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:24:49.994 00:24:49.994 Allocating group tables: 0/64 done 00:24:49.994 Writing inode tables: 0/64 done 00:24:50.927 Creating journal (8192 blocks): done 00:24:53.126 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:24:53.126 00:24:53.126 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:24:53.126 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 569685 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:24:58.387 00:24:58.387 real 0m8.513s 00:24:58.387 user 0m0.026s 00:24:58.387 sys 0m0.075s 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:24:58.387 ************************************ 00:24:58.387 END TEST filesystem_in_capsule_ext4 00:24:58.387 ************************************ 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.387 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:58.645 ************************************ 00:24:58.645 START 
TEST filesystem_in_capsule_btrfs 00:24:58.645 ************************************ 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:24:58.645 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:24:58.645 btrfs-progs v6.8.1 00:24:58.646 See https://btrfs.readthedocs.io for more information. 00:24:58.646 00:24:58.646 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:24:58.646 NOTE: several default settings have changed in version 5.15, please make sure 00:24:58.646 this does not affect your deployments: 00:24:58.646 - DUP for metadata (-m dup) 00:24:58.646 - enabled no-holes (-O no-holes) 00:24:58.646 - enabled free-space-tree (-R free-space-tree) 00:24:58.646 00:24:58.646 Label: (null) 00:24:58.646 UUID: e90a8eed-0af4-4b72-bd75-83053e1c51b3 00:24:58.646 Node size: 16384 00:24:58.646 Sector size: 4096 (CPU page size: 4096) 00:24:58.646 Filesystem size: 510.00MiB 00:24:58.646 Block group profiles: 00:24:58.646 Data: single 8.00MiB 00:24:58.646 Metadata: DUP 32.00MiB 00:24:58.646 System: DUP 8.00MiB 00:24:58.646 SSD detected: yes 00:24:58.646 Zoned device: no 00:24:58.646 Features: extref, skinny-metadata, no-holes, free-space-tree 00:24:58.646 Checksum: crc32c 00:24:58.646 Number of devices: 1 00:24:58.646 Devices: 00:24:58.646 ID SIZE PATH 00:24:58.646 1 510.00MiB /dev/nvme0n1p1 00:24:58.646 00:24:58.646 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:24:58.646 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 569685 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:24:59.580 00:24:59.580 real 0m1.035s 00:24:59.580 user 0m0.028s 00:24:59.580 sys 0m0.114s 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:24:59.580 ************************************ 00:24:59.580 END TEST filesystem_in_capsule_btrfs 00:24:59.580 ************************************ 00:24:59.580 10:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:24:59.580 ************************************ 00:24:59.580 START TEST filesystem_in_capsule_xfs 00:24:59.580 ************************************ 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:24:59.580 
10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:24:59.580 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:24:59.839 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:24:59.839 = sectsz=512 attr=2, projid32bit=1 00:24:59.839 = crc=1 finobt=1, sparse=1, rmapbt=0 00:24:59.839 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:24:59.839 data = bsize=4096 blocks=130560, imaxpct=25 00:24:59.839 = sunit=0 swidth=0 blks 00:24:59.839 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:24:59.839 log =internal log bsize=4096 blocks=16384, version=2 00:24:59.839 = sectsz=512 sunit=0 blks, lazy-count=1 00:24:59.839 realtime =none extsz=4096 blocks=0, rtextents=0 00:25:01.210 Discarding blocks...Done. 
00:25:01.210 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:25:01.210 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:25:03.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 569685 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:25:03.111 00:25:03.111 real 0m3.217s 00:25:03.111 user 0m0.022s 00:25:03.111 sys 0m0.077s 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:25:03.111 ************************************ 00:25:03.111 END TEST filesystem_in_capsule_xfs 00:25:03.111 ************************************ 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:25:03.111 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:03.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:03.111 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 569685 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 569685 ']' 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 569685 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.111 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569685 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569685' 00:25:03.111 killing process with pid 569685 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 569685 00:25:03.111 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 569685 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:25:03.676 00:25:03.676 real 0m19.165s 00:25:03.676 user 1m15.271s 00:25:03.676 sys 0m1.510s 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:25:03.676 ************************************ 00:25:03.676 END TEST nvmf_filesystem_in_capsule 00:25:03.676 ************************************ 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.676 rmmod nvme_tcp 00:25:03.676 rmmod nvme_fabrics 00:25:03.676 rmmod nvme_keyring 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:25:03.676 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.677 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.207 00:25:06.207 real 0m48.787s 00:25:06.207 user 2m40.995s 00:25:06.207 sys 0m7.423s 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:25:06.207 ************************************ 00:25:06.207 END TEST nvmf_filesystem 00:25:06.207 ************************************ 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:06.207 ************************************ 00:25:06.207 START TEST nvmf_target_discovery 00:25:06.207 ************************************ 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:25:06.207 * Looking for test storage... 
00:25:06.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.207 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:06.208 
10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:25:06.208 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.208 --rc genhtml_branch_coverage=1 00:25:06.208 --rc genhtml_function_coverage=1 00:25:06.208 --rc genhtml_legend=1 00:25:06.208 --rc geninfo_all_blocks=1 00:25:06.208 --rc geninfo_unexecuted_blocks=1 00:25:06.208 00:25:06.208 ' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.208 --rc genhtml_branch_coverage=1 00:25:06.208 --rc genhtml_function_coverage=1 00:25:06.208 --rc genhtml_legend=1 00:25:06.208 --rc geninfo_all_blocks=1 00:25:06.208 --rc geninfo_unexecuted_blocks=1 00:25:06.208 00:25:06.208 ' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.208 --rc genhtml_branch_coverage=1 00:25:06.208 --rc genhtml_function_coverage=1 00:25:06.208 --rc genhtml_legend=1 00:25:06.208 --rc geninfo_all_blocks=1 00:25:06.208 --rc geninfo_unexecuted_blocks=1 00:25:06.208 00:25:06.208 ' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.208 --rc genhtml_branch_coverage=1 00:25:06.208 --rc genhtml_function_coverage=1 00:25:06.208 --rc genhtml_legend=1 00:25:06.208 --rc geninfo_all_blocks=1 00:25:06.208 --rc geninfo_unexecuted_blocks=1 00:25:06.208 00:25:06.208 ' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.208 10:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.208 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.209 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.476 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.476 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:11.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:11.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.476 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:11.476 Found net devices under 0000:86:00.0: cvl_0_0 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.476 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:11.476 Found net devices under 0000:86:00.1: cvl_0_1 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.476 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:25:11.477 00:25:11.477 --- 10.0.0.2 ping statistics --- 00:25:11.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.477 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:11.477 00:25:11.477 --- 10.0.0.1 ping statistics --- 00:25:11.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.477 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.477 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=577038 00:25:11.736 10:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 577038 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 577038 ']' 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.736 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.736 [2024-12-09 10:36:12.732246] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:25:11.736 [2024-12-09 10:36:12.732295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.736 [2024-12-09 10:36:12.802378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.736 [2024-12-09 10:36:12.845078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:11.736 [2024-12-09 10:36:12.845115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.736 [2024-12-09 10:36:12.845127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.736 [2024-12-09 10:36:12.845133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.736 [2024-12-09 10:36:12.845138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.736 [2024-12-09 10:36:12.846754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.736 [2024-12-09 10:36:12.846850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.736 [2024-12-09 10:36:12.846939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.736 [2024-12-09 10:36:12.846940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 [2024-12-09 10:36:12.985959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 Null1 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 
10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 [2024-12-09 10:36:13.043158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 Null2 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 
10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.995 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 Null3 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 Null4 00:25:11.996 
10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.996 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:25:12.255 00:25:12.255 Discovery Log Number of Records 6, Generation counter 6 00:25:12.255 =====Discovery Log Entry 0====== 00:25:12.255 trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: current discovery subsystem 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4420 00:25:12.255 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: explicit discovery connections, duplicate discovery information 00:25:12.255 sectype: none 00:25:12.255 =====Discovery Log Entry 1====== 00:25:12.255 trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: nvme subsystem 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4420 00:25:12.255 subnqn: nqn.2016-06.io.spdk:cnode1 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: none 00:25:12.255 sectype: none 00:25:12.255 =====Discovery Log Entry 2====== 00:25:12.255 
trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: nvme subsystem 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4420 00:25:12.255 subnqn: nqn.2016-06.io.spdk:cnode2 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: none 00:25:12.255 sectype: none 00:25:12.255 =====Discovery Log Entry 3====== 00:25:12.255 trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: nvme subsystem 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4420 00:25:12.255 subnqn: nqn.2016-06.io.spdk:cnode3 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: none 00:25:12.255 sectype: none 00:25:12.255 =====Discovery Log Entry 4====== 00:25:12.255 trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: nvme subsystem 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4420 00:25:12.255 subnqn: nqn.2016-06.io.spdk:cnode4 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: none 00:25:12.255 sectype: none 00:25:12.255 =====Discovery Log Entry 5====== 00:25:12.255 trtype: tcp 00:25:12.255 adrfam: ipv4 00:25:12.255 subtype: discovery subsystem referral 00:25:12.255 treq: not required 00:25:12.255 portid: 0 00:25:12.255 trsvcid: 4430 00:25:12.255 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:12.255 traddr: 10.0.0.2 00:25:12.255 eflags: none 00:25:12.255 sectype: none 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:25:12.255 Perform nvmf subsystem discovery via RPC 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.255 [ 00:25:12.255 { 00:25:12.255 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:25:12.255 "subtype": "Discovery", 00:25:12.255 "listen_addresses": [ 00:25:12.255 { 00:25:12.255 "trtype": "TCP", 00:25:12.255 "adrfam": "IPv4", 00:25:12.255 "traddr": "10.0.0.2", 00:25:12.255 "trsvcid": "4420" 00:25:12.255 } 00:25:12.255 ], 00:25:12.255 "allow_any_host": true, 00:25:12.255 "hosts": [] 00:25:12.255 }, 00:25:12.255 { 00:25:12.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.255 "subtype": "NVMe", 00:25:12.255 "listen_addresses": [ 00:25:12.255 { 00:25:12.255 "trtype": "TCP", 00:25:12.255 "adrfam": "IPv4", 00:25:12.255 "traddr": "10.0.0.2", 00:25:12.255 "trsvcid": "4420" 00:25:12.255 } 00:25:12.255 ], 00:25:12.255 "allow_any_host": true, 00:25:12.255 "hosts": [], 00:25:12.255 "serial_number": "SPDK00000000000001", 00:25:12.255 "model_number": "SPDK bdev Controller", 00:25:12.255 "max_namespaces": 32, 00:25:12.255 "min_cntlid": 1, 00:25:12.255 "max_cntlid": 65519, 00:25:12.255 "namespaces": [ 00:25:12.255 { 00:25:12.255 "nsid": 1, 00:25:12.255 "bdev_name": "Null1", 00:25:12.255 "name": "Null1", 00:25:12.255 "nguid": "8C4C689E1CFE4A858BAB60EFAEF7C132", 00:25:12.255 "uuid": "8c4c689e-1cfe-4a85-8bab-60efaef7c132" 00:25:12.255 } 00:25:12.255 ] 00:25:12.255 }, 00:25:12.255 { 00:25:12.255 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:25:12.255 "subtype": "NVMe", 00:25:12.255 "listen_addresses": [ 00:25:12.255 { 00:25:12.255 "trtype": "TCP", 00:25:12.255 "adrfam": "IPv4", 00:25:12.255 "traddr": "10.0.0.2", 00:25:12.255 "trsvcid": "4420" 00:25:12.255 } 00:25:12.255 ], 00:25:12.255 "allow_any_host": true, 00:25:12.255 "hosts": [], 00:25:12.255 "serial_number": "SPDK00000000000002", 00:25:12.255 "model_number": "SPDK bdev Controller", 00:25:12.255 "max_namespaces": 32, 00:25:12.255 "min_cntlid": 1, 00:25:12.255 "max_cntlid": 65519, 00:25:12.255 "namespaces": [ 00:25:12.255 { 00:25:12.255 "nsid": 1, 00:25:12.255 "bdev_name": "Null2", 00:25:12.255 "name": "Null2", 00:25:12.255 "nguid": "048A23668E73419F8E1AF90BE8A2A563", 
00:25:12.255 "uuid": "048a2366-8e73-419f-8e1a-f90be8a2a563" 00:25:12.255 } 00:25:12.255 ] 00:25:12.255 }, 00:25:12.255 { 00:25:12.255 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:25:12.255 "subtype": "NVMe", 00:25:12.255 "listen_addresses": [ 00:25:12.255 { 00:25:12.255 "trtype": "TCP", 00:25:12.255 "adrfam": "IPv4", 00:25:12.255 "traddr": "10.0.0.2", 00:25:12.255 "trsvcid": "4420" 00:25:12.255 } 00:25:12.255 ], 00:25:12.255 "allow_any_host": true, 00:25:12.255 "hosts": [], 00:25:12.255 "serial_number": "SPDK00000000000003", 00:25:12.255 "model_number": "SPDK bdev Controller", 00:25:12.255 "max_namespaces": 32, 00:25:12.255 "min_cntlid": 1, 00:25:12.255 "max_cntlid": 65519, 00:25:12.255 "namespaces": [ 00:25:12.255 { 00:25:12.255 "nsid": 1, 00:25:12.255 "bdev_name": "Null3", 00:25:12.255 "name": "Null3", 00:25:12.255 "nguid": "2E6A8201F7644F84B8C76E05170E3985", 00:25:12.255 "uuid": "2e6a8201-f764-4f84-b8c7-6e05170e3985" 00:25:12.255 } 00:25:12.255 ] 00:25:12.255 }, 00:25:12.255 { 00:25:12.255 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:25:12.255 "subtype": "NVMe", 00:25:12.255 "listen_addresses": [ 00:25:12.255 { 00:25:12.255 "trtype": "TCP", 00:25:12.255 "adrfam": "IPv4", 00:25:12.255 "traddr": "10.0.0.2", 00:25:12.255 "trsvcid": "4420" 00:25:12.255 } 00:25:12.255 ], 00:25:12.255 "allow_any_host": true, 00:25:12.255 "hosts": [], 00:25:12.255 "serial_number": "SPDK00000000000004", 00:25:12.255 "model_number": "SPDK bdev Controller", 00:25:12.255 "max_namespaces": 32, 00:25:12.255 "min_cntlid": 1, 00:25:12.255 "max_cntlid": 65519, 00:25:12.255 "namespaces": [ 00:25:12.255 { 00:25:12.255 "nsid": 1, 00:25:12.255 "bdev_name": "Null4", 00:25:12.255 "name": "Null4", 00:25:12.255 "nguid": "554A3E65223448899A9AD6CF34996642", 00:25:12.255 "uuid": "554a3e65-2234-4889-9a9a-d6cf34996642" 00:25:12.255 } 00:25:12.255 ] 00:25:12.255 } 00:25:12.255 ] 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.255 
10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:25:12.255 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.256 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.515 rmmod nvme_tcp 00:25:12.515 rmmod nvme_fabrics 00:25:12.515 rmmod nvme_keyring 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 577038 ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 577038 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 577038 ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 577038 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:25:12.515 
10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577038 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577038' 00:25:12.515 killing process with pid 577038 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 577038 00:25:12.515 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 577038 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.773 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.774 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.774 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.774 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.677 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.677 00:25:14.677 real 0m9.006s 00:25:14.677 user 0m5.384s 00:25:14.677 sys 0m4.569s 00:25:14.677 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.677 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.677 ************************************ 00:25:14.677 END TEST nvmf_target_discovery 00:25:14.677 ************************************ 00:25:14.936 10:36:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:25:14.936 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:14.936 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.936 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:14.936 ************************************ 00:25:14.936 START TEST nvmf_referrals 00:25:14.936 ************************************ 00:25:14.936 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:25:14.936 * Looking for test storage... 
00:25:14.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:25:14.936 10:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:14.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.936 
--rc genhtml_branch_coverage=1 00:25:14.936 --rc genhtml_function_coverage=1 00:25:14.936 --rc genhtml_legend=1 00:25:14.936 --rc geninfo_all_blocks=1 00:25:14.936 --rc geninfo_unexecuted_blocks=1 00:25:14.936 00:25:14.936 ' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:14.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.936 --rc genhtml_branch_coverage=1 00:25:14.936 --rc genhtml_function_coverage=1 00:25:14.936 --rc genhtml_legend=1 00:25:14.936 --rc geninfo_all_blocks=1 00:25:14.936 --rc geninfo_unexecuted_blocks=1 00:25:14.936 00:25:14.936 ' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:14.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.936 --rc genhtml_branch_coverage=1 00:25:14.936 --rc genhtml_function_coverage=1 00:25:14.936 --rc genhtml_legend=1 00:25:14.936 --rc geninfo_all_blocks=1 00:25:14.936 --rc geninfo_unexecuted_blocks=1 00:25:14.936 00:25:14.936 ' 00:25:14.936 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:14.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.936 --rc genhtml_branch_coverage=1 00:25:14.937 --rc genhtml_function_coverage=1 00:25:14.937 --rc genhtml_legend=1 00:25:14.937 --rc geninfo_all_blocks=1 00:25:14.937 --rc geninfo_unexecuted_blocks=1 00:25:14.937 00:25:14.937 ' 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.937 
10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.937 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.196 10:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.196 10:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.196 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:20.554 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.554 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:20.555 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:20.555 Found net devices under 0000:86:00.0: cvl_0_0 00:25:20.555 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:20.555 Found net devices under 0000:86:00.1: cvl_0_1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.555 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:25:20.815 00:25:20.815 --- 10.0.0.2 ping statistics --- 00:25:20.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.815 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:25:20.815 00:25:20.815 --- 10.0.0.1 ping statistics --- 00:25:20.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.815 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=580733 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 580733 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 580733 ']' 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.815 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.816 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.816 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:20.816 [2024-12-09 10:36:21.846644] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:25:20.816 [2024-12-09 10:36:21.846695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.816 [2024-12-09 10:36:21.916555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.816 [2024-12-09 10:36:21.958230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.816 [2024-12-09 10:36:21.958269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:20.816 [2024-12-09 10:36:21.958277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.816 [2024-12-09 10:36:21.958283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.816 [2024-12-09 10:36:21.958287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.816 [2024-12-09 10:36:21.959760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.816 [2024-12-09 10:36:21.959854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.816 [2024-12-09 10:36:21.959942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.816 [2024-12-09 10:36:21.959943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 [2024-12-09 10:36:22.110402] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 [2024-12-09 10:36:22.137186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:25:21.075 10:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.334 10:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.334 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:25:21.335 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:25:21.594 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:25:21.595 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:21.595 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:21.854 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.113 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:22.373 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:25:22.631 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:25:22.631 10:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:25:22.631 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:25:22.631 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:25:22.631 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:22.631 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:25:22.891 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.151 rmmod nvme_tcp 00:25:23.151 rmmod nvme_fabrics 00:25:23.151 rmmod nvme_keyring 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 580733 ']' 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 580733 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 580733 ']' 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 580733 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580733 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580733' 00:25:23.151 killing process with pid 580733 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 580733 00:25:23.151 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 580733 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.410 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.411 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.411 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.411 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.411 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.947 00:25:25.947 real 0m10.640s 00:25:25.947 user 0m12.412s 00:25:25.947 sys 0m5.006s 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:25:25.947 ************************************ 
00:25:25.947 END TEST nvmf_referrals 00:25:25.947 ************************************ 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:25.947 ************************************ 00:25:25.947 START TEST nvmf_connect_disconnect 00:25:25.947 ************************************ 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:25:25.947 * Looking for test storage... 
00:25:25.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.947 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:25.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.948 --rc genhtml_branch_coverage=1 00:25:25.948 --rc genhtml_function_coverage=1 00:25:25.948 --rc genhtml_legend=1 00:25:25.948 --rc geninfo_all_blocks=1 00:25:25.948 --rc geninfo_unexecuted_blocks=1 00:25:25.948 00:25:25.948 ' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:25.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.948 --rc genhtml_branch_coverage=1 00:25:25.948 --rc genhtml_function_coverage=1 00:25:25.948 --rc genhtml_legend=1 00:25:25.948 --rc geninfo_all_blocks=1 00:25:25.948 --rc geninfo_unexecuted_blocks=1 00:25:25.948 00:25:25.948 ' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:25.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.948 --rc genhtml_branch_coverage=1 00:25:25.948 --rc genhtml_function_coverage=1 00:25:25.948 --rc genhtml_legend=1 00:25:25.948 --rc geninfo_all_blocks=1 00:25:25.948 --rc geninfo_unexecuted_blocks=1 00:25:25.948 00:25:25.948 ' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:25.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.948 --rc genhtml_branch_coverage=1 00:25:25.948 --rc genhtml_function_coverage=1 00:25:25.948 --rc genhtml_legend=1 00:25:25.948 --rc geninfo_all_blocks=1 00:25:25.948 --rc geninfo_unexecuted_blocks=1 00:25:25.948 00:25:25.948 ' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.948 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.220 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.220 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.221 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:31.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:31.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.221 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:31.221 Found net devices under 0000:86:00.0: cvl_0_0 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.221 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:31.221 Found net devices under 0000:86:00.1: cvl_0_1 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.221 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.222 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:25:31.222 00:25:31.222 --- 10.0.0.2 ping statistics --- 00:25:31.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.222 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
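The nvmf/common.sh@250-291 trace above can be condensed into the following dry-run sketch of the namespace plumbing: the target NIC is moved into a fresh netns, each side gets an address, a firewall rule opens the NVMe/TCP port, and a ping in each direction verifies the link. Interface names and addresses are copied from this log; `run_cmd` only prints, so the sketch runs without root or the actual `cvl_*` devices.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup traced above (nvmf/common.sh@250-291).
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace, carries the initiator IP
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run_cmd() { echo "+ $*"; }   # swap the body for "$@" (as root) to apply for real

run_cmd ip -4 addr flush "$TARGET_IF"
run_cmd ip -4 addr flush "$INITIATOR_IF"
run_cmd ip netns add "$NS"
run_cmd ip link set "$TARGET_IF" netns "$NS"
run_cmd ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run_cmd ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run_cmd ip link set "$INITIATOR_IF" up
run_cmd ip netns exec "$NS" ip link set "$TARGET_IF" up
run_cmd ip netns exec "$NS" ip link set lo up
run_cmd iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as in the ping output above
run_cmd ping -c 1 "$TARGET_IP"
run_cmd ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Moving only the target interface into the namespace is what lets the initiator-side tools in the root namespace talk to an `nvmf_tgt` that is fully isolated behind `ip netns exec`.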
00:25:31.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:25:31.222 00:25:31.222 --- 10.0.0.1 ping statistics --- 00:25:31.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.222 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=584804 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 584804 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 584804 ']' 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.222 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.482 [2024-12-09 10:36:32.436789] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:25:31.482 [2024-12-09 10:36:32.436834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.482 [2024-12-09 10:36:32.506548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.482 [2024-12-09 10:36:32.547046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:31.482 [2024-12-09 10:36:32.547087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.482 [2024-12-09 10:36:32.547094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.482 [2024-12-09 10:36:32.547100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.482 [2024-12-09 10:36:32.547104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.482 [2024-12-09 10:36:32.548700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.482 [2024-12-09 10:36:32.548795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.482 [2024-12-09 10:36:32.548882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.482 [2024-12-09 10:36:32.548883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.482 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.482 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:25:31.482 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.482 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.482 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:25:31.741 10:36:32 
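The four "Reactor started on core N" notices above follow directly from the `-m 0xF` core mask passed to `nvmf_tgt`. A small helper (hypothetical, not part of the SPDK scripts) decodes such a mask into the core list:

```shell
# Decode an SPDK core mask (e.g. the "-m 0xF" above) into its core numbers.
decode_coremask() {
  local mask=$(( $1 ))   # arithmetic expansion accepts 0x-prefixed hex
  local core=0
  local -a cores=()
  while (( mask )); do
    if (( mask & 1 )); then
      cores+=("$core")
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores[@]}"
}

decode_coremask 0xF    # -> 0 1 2 3, one reactor per core as in the log
```

Mask 0xF is binary 1111, so reactors land on cores 0-3, matching the four reactor notices in the trace.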
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 [2024-12-09 10:36:32.699242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.741 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:31.741 [2024-12-09 10:36:32.774498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:25:31.741 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:25:35.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:38.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:41.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:44.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:48.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:25:48.215 10:36:49 
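The `rpc_cmd` calls traced in connect_disconnect.sh@18-24 form the standard provisioning sequence for an NVMe/TCP target: transport, backing bdev, subsystem, namespace, listener. The sketch below replays it as a dry run; `rpc` only prints here, whereas the real test routes each call through `scripts/rpc.py` to the `nvmf_tgt` socket inside the namespace.

```shell
# Dry-run of the RPC sequence issued by connect_disconnect.sh above.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }   # stand-in for the real rpc_cmd wrapper

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
bdev=Malloc0                  # name returned by bdev_malloc_create in the log
rpc bdev_malloc_create 64 512 # 64 MiB malloc bdev with 512-byte blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$NQN" "$bdev"
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up ("Target Listening on 10.0.0.2 port 4420" above), the test loops five connect/disconnect iterations against that NQN, which is what produces the five "disconnected 1 controller(s)" lines.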
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.215 rmmod nvme_tcp 00:25:48.215 rmmod nvme_fabrics 00:25:48.215 rmmod nvme_keyring 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 584804 ']' 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 584804 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 584804 ']' 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 584804 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584804 00:25:48.215 
10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584804' 00:25:48.215 killing process with pid 584804 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 584804 00:25:48.215 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 584804 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
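The `iptr` cleanup above works because the earlier insert (common.sh@790) tagged the rule with an `SPDK_NVMF:` comment: teardown pipes `iptables-save` through `grep -v SPDK_NVMF` and feeds the result to `iptables-restore`, removing only the test's own rules. The sketch below simulates that filter on a canned ruleset instead of the live firewall:

```shell
# Simulated version of the tagged-rule cleanup (common.sh@791), run against a
# sample ruleset rather than real iptables-save output.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Real flow: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Filtering by comment tag is safer than deleting rules by position, since unrelated rules added while the test ran survive the restore untouched.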
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.473 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.379 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.379 00:25:50.379 real 0m24.901s 00:25:50.379 user 1m8.598s 00:25:50.379 sys 0m5.493s 00:25:50.379 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.379 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:50.379 ************************************ 00:25:50.379 END TEST nvmf_connect_disconnect 00:25:50.379 ************************************ 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:50.639 ************************************ 00:25:50.639 START TEST nvmf_multitarget 00:25:50.639 ************************************ 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:25:50.639 * Looking for test storage... 
00:25:50.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:25:50.639 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:50.640 --rc 
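The scripts/common.sh trace above (cmp_versions, `IFS=.-:`, `read -ra ver1` ...) checks whether the installed lcov predates 2.0 by a numeric field-by-field comparison. A minimal re-implementation of that `lt` check, simplified but with the same splitting and missing-field-defaults-to-zero behavior:

```shell
# Minimal sketch of the "lt" version comparison traced above: split on ".",
# "-" and ":", then compare numerically field by field.
lt() {
  local IFS=.-: i
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for (( i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++ )); do
    # missing fields default to 0, so "2" compares like "2.0"
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1   # versions are equal
}

lt 1.15 2 && echo "1.15 < 2"   # the exact comparison in the trace
```

Numeric comparison matters here: a naive string comparison would order "1.15" after "1.9", while this helper correctly treats 15 > 9 and 1 < 2.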
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.640 --rc genhtml_branch_coverage=1 00:25:50.640 --rc genhtml_function_coverage=1 00:25:50.640 --rc genhtml_legend=1 00:25:50.640 --rc geninfo_all_blocks=1 00:25:50.640 --rc geninfo_unexecuted_blocks=1 00:25:50.640 00:25:50.640 ' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.640 --rc genhtml_branch_coverage=1 00:25:50.640 --rc genhtml_function_coverage=1 00:25:50.640 --rc genhtml_legend=1 00:25:50.640 --rc geninfo_all_blocks=1 00:25:50.640 --rc geninfo_unexecuted_blocks=1 00:25:50.640 00:25:50.640 ' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.640 --rc genhtml_branch_coverage=1 00:25:50.640 --rc genhtml_function_coverage=1 00:25:50.640 --rc genhtml_legend=1 00:25:50.640 --rc geninfo_all_blocks=1 00:25:50.640 --rc geninfo_unexecuted_blocks=1 00:25:50.640 00:25:50.640 ' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.640 --rc genhtml_branch_coverage=1 00:25:50.640 --rc genhtml_function_coverage=1 00:25:50.640 --rc genhtml_legend=1 00:25:50.640 --rc geninfo_all_blocks=1 00:25:50.640 --rc geninfo_unexecuted_blocks=1 00:25:50.640 00:25:50.640 ' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.640 10:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.640 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
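The PATH echoed by paths/export.sh@6 above carries the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` directories many times over, because each `source` of export.sh prepends them again. That is harmless (lookup stops at the first hit) but noisy; a first-occurrence-wins dedupe pass, shown on a sample string rather than the live PATH, illustrates the fix:

```shell
# Deduplicate a PATH-style string, keeping the first occurrence of each entry.
dedupe_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

sample=/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin
dedupe_path "$sample"   # -> /opt/go/bin:/usr/bin:/usr/local/bin
```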
00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.900 10:36:51 
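The non-fatal error in the trace above ("common.sh: line 33: [: : integer expression expected") is the classic empty-operand pitfall: `[ '' -eq 1 ]` fails because `-eq` requires integers on both sides, and the tested variable is unset. Defaulting the expansion avoids it; sketched below with a hypothetical `FLAG` variable, not the actual common.sh variable:

```shell
# Reproduce and fix the "-eq" pitfall seen in the log above.
FLAG=''   # unset/empty flag, as in the failing check

# [ "$FLAG" -eq 1 ]               # would error: integer expression expected
if [ "${FLAG:-0}" -eq 1 ]; then   # ":-0" makes empty compare as 0
  echo "flag set"
else
  echo "flag clear"
fi
```

In the log the script recovers because the `'[' -n '' ']'` fallback on the next line takes over, but the `${VAR:-0}` default would silence the error entirely.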
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.900 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:25:56.167 10:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.167 10:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:56.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.167 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:56.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.168 10:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:56.168 Found net devices under 0000:86:00.0: cvl_0_0 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.168 
10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:56.168 Found net devices under 0000:86:00.1: cvl_0_1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.168 10:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:56.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:25:56.168 00:25:56.168 --- 10.0.0.2 ping statistics --- 00:25:56.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.168 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:56.168 00:25:56.168 --- 10.0.0.1 ping statistics --- 00:25:56.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.168 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:56.168 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=591013 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 591013 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 591013 ']' 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.427 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:56.427 [2024-12-09 10:36:57.427377] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:25:56.427 [2024-12-09 10:36:57.427430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.427 [2024-12-09 10:36:57.497991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.427 [2024-12-09 10:36:57.542822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.427 [2024-12-09 10:36:57.542860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:56.428 [2024-12-09 10:36:57.542868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.428 [2024-12-09 10:36:57.542874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.428 [2024-12-09 10:36:57.542879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.428 [2024-12-09 10:36:57.544464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.428 [2024-12-09 10:36:57.544562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.428 [2024-12-09 10:36:57.544652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.428 [2024-12-09 10:36:57.544653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:56.686 10:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:25:56.686 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:25:56.945 "nvmf_tgt_1" 00:25:56.945 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:25:56.945 "nvmf_tgt_2" 00:25:56.945 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:56.945 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:25:56.945 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:25:56.945 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:25:57.203 true 00:25:57.203 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:25:57.203 true 00:25:57.204 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:25:57.204 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.463 rmmod nvme_tcp 00:25:57.463 rmmod nvme_fabrics 00:25:57.463 rmmod nvme_keyring 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 591013 ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 591013 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 591013 ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 591013 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591013 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591013' 00:25:57.463 killing process with pid 591013 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 591013 00:25:57.463 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 591013 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.722 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.260 00:26:00.260 real 0m9.231s 00:26:00.260 user 0m7.204s 00:26:00.260 sys 0m4.541s 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:26:00.260 ************************************ 00:26:00.260 END TEST nvmf_multitarget 00:26:00.260 ************************************ 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.260 ************************************ 00:26:00.260 START TEST nvmf_rpc 00:26:00.260 ************************************ 00:26:00.260 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:26:00.260 * Looking for test storage... 
00:26:00.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.260 10:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.260 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:00.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.260 --rc genhtml_branch_coverage=1 00:26:00.260 --rc genhtml_function_coverage=1 00:26:00.260 --rc genhtml_legend=1 00:26:00.260 --rc geninfo_all_blocks=1 00:26:00.260 --rc geninfo_unexecuted_blocks=1 
00:26:00.260 00:26:00.261 ' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.261 --rc genhtml_branch_coverage=1 00:26:00.261 --rc genhtml_function_coverage=1 00:26:00.261 --rc genhtml_legend=1 00:26:00.261 --rc geninfo_all_blocks=1 00:26:00.261 --rc geninfo_unexecuted_blocks=1 00:26:00.261 00:26:00.261 ' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.261 --rc genhtml_branch_coverage=1 00:26:00.261 --rc genhtml_function_coverage=1 00:26:00.261 --rc genhtml_legend=1 00:26:00.261 --rc geninfo_all_blocks=1 00:26:00.261 --rc geninfo_unexecuted_blocks=1 00:26:00.261 00:26:00.261 ' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:00.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.261 --rc genhtml_branch_coverage=1 00:26:00.261 --rc genhtml_function_coverage=1 00:26:00.261 --rc genhtml_legend=1 00:26:00.261 --rc geninfo_all_blocks=1 00:26:00.261 --rc geninfo_unexecuted_blocks=1 00:26:00.261 00:26:00.261 ' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.261 10:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.261 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.261 10:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.535 
10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:26:05.535 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:05.535 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:05.535 Found net devices under 0000:86:00.0: cvl_0_0 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.535 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:05.536 Found net devices under 0000:86:00.1: cvl_0_1 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.536 10:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:05.536 
10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:05.536 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:26:05.795 00:26:05.795 --- 10.0.0.2 ping statistics --- 00:26:05.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.795 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:26:05.795 00:26:05.795 --- 10.0.0.1 ping statistics --- 00:26:05.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.795 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=594757 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 594757 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 594757 ']' 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.795 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:05.795 [2024-12-09 10:37:06.861224] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:26:05.795 [2024-12-09 10:37:06.861272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.795 [2024-12-09 10:37:06.931476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.054 [2024-12-09 10:37:06.974752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.054 [2024-12-09 10:37:06.974788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:06.054 [2024-12-09 10:37:06.974796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.054 [2024-12-09 10:37:06.974802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.054 [2024-12-09 10:37:06.974806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.054 [2024-12-09 10:37:06.976228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.054 [2024-12-09 10:37:06.976323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.054 [2024-12-09 10:37:06.976389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.054 [2024-12-09 10:37:06.976390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.054 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:26:06.054 "tick_rate": 2300000000, 00:26:06.054 "poll_groups": [ 00:26:06.054 { 00:26:06.054 "name": "nvmf_tgt_poll_group_000", 00:26:06.054 "admin_qpairs": 0, 00:26:06.054 "io_qpairs": 0, 00:26:06.054 "current_admin_qpairs": 0, 00:26:06.054 "current_io_qpairs": 0, 00:26:06.054 "pending_bdev_io": 0, 00:26:06.054 "completed_nvme_io": 0, 00:26:06.054 "transports": [] 00:26:06.054 }, 00:26:06.054 { 00:26:06.054 "name": "nvmf_tgt_poll_group_001", 00:26:06.054 "admin_qpairs": 0, 00:26:06.054 "io_qpairs": 0, 00:26:06.054 "current_admin_qpairs": 0, 00:26:06.054 "current_io_qpairs": 0, 00:26:06.054 "pending_bdev_io": 0, 00:26:06.054 "completed_nvme_io": 0, 00:26:06.054 "transports": [] 00:26:06.054 }, 00:26:06.054 { 00:26:06.054 "name": "nvmf_tgt_poll_group_002", 00:26:06.054 "admin_qpairs": 0, 00:26:06.054 "io_qpairs": 0, 00:26:06.054 "current_admin_qpairs": 0, 00:26:06.054 "current_io_qpairs": 0, 00:26:06.054 "pending_bdev_io": 0, 00:26:06.054 "completed_nvme_io": 0, 00:26:06.054 "transports": [] 00:26:06.054 }, 00:26:06.054 { 00:26:06.054 "name": "nvmf_tgt_poll_group_003", 00:26:06.054 "admin_qpairs": 0, 00:26:06.054 "io_qpairs": 0, 00:26:06.054 "current_admin_qpairs": 0, 00:26:06.054 "current_io_qpairs": 0, 00:26:06.054 "pending_bdev_io": 0, 00:26:06.054 "completed_nvme_io": 0, 00:26:06.054 "transports": [] 00:26:06.054 } 00:26:06.054 ] 00:26:06.054 }' 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:26:06.054 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.054 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 [2024-12-09 10:37:07.230925] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:26:06.313 "tick_rate": 2300000000, 00:26:06.313 "poll_groups": [ 00:26:06.313 { 00:26:06.313 "name": "nvmf_tgt_poll_group_000", 00:26:06.313 "admin_qpairs": 0, 00:26:06.313 "io_qpairs": 0, 00:26:06.313 "current_admin_qpairs": 0, 00:26:06.313 "current_io_qpairs": 0, 00:26:06.313 "pending_bdev_io": 0, 00:26:06.313 "completed_nvme_io": 0, 00:26:06.313 "transports": [ 00:26:06.313 { 00:26:06.313 "trtype": "TCP" 00:26:06.313 } 00:26:06.313 ] 00:26:06.313 }, 00:26:06.313 { 00:26:06.313 "name": "nvmf_tgt_poll_group_001", 00:26:06.313 "admin_qpairs": 0, 00:26:06.313 "io_qpairs": 0, 00:26:06.313 "current_admin_qpairs": 0, 00:26:06.313 "current_io_qpairs": 0, 00:26:06.313 "pending_bdev_io": 0, 00:26:06.313 
"completed_nvme_io": 0, 00:26:06.313 "transports": [ 00:26:06.313 { 00:26:06.313 "trtype": "TCP" 00:26:06.313 } 00:26:06.313 ] 00:26:06.313 }, 00:26:06.313 { 00:26:06.313 "name": "nvmf_tgt_poll_group_002", 00:26:06.313 "admin_qpairs": 0, 00:26:06.313 "io_qpairs": 0, 00:26:06.313 "current_admin_qpairs": 0, 00:26:06.313 "current_io_qpairs": 0, 00:26:06.313 "pending_bdev_io": 0, 00:26:06.313 "completed_nvme_io": 0, 00:26:06.313 "transports": [ 00:26:06.313 { 00:26:06.313 "trtype": "TCP" 00:26:06.313 } 00:26:06.313 ] 00:26:06.313 }, 00:26:06.313 { 00:26:06.313 "name": "nvmf_tgt_poll_group_003", 00:26:06.313 "admin_qpairs": 0, 00:26:06.313 "io_qpairs": 0, 00:26:06.313 "current_admin_qpairs": 0, 00:26:06.313 "current_io_qpairs": 0, 00:26:06.313 "pending_bdev_io": 0, 00:26:06.313 "completed_nvme_io": 0, 00:26:06.313 "transports": [ 00:26:06.313 { 00:26:06.313 "trtype": "TCP" 00:26:06.313 } 00:26:06.313 ] 00:26:06.313 } 00:26:06.313 ] 00:26:06.313 }' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:26:06.313 
10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 Malloc1 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:26:06.313 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.313 [2024-12-09 10:37:07.423219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.313 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:26:06.314 [2024-12-09 10:37:07.451694] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:26:06.314 Failed to write to /dev/nvme-fabrics: Input/output error 00:26:06.314 could not add new controller: failed to write to nvme-fabrics device 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.314 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.572 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.572 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:07.507 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:26:07.507 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:07.507 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.507 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:07.507 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:10.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:10.037 10:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:10.037 [2024-12-09 10:37:10.817878] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:26:10.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:26:10.037 could not add new controller: failed to write to nvme-fabrics device 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:10.037 
10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.037 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:10.969 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:26:10.969 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:10.969 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.969 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:10.969 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:12.863 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:12.863 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:26:13.121 10:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 [2024-12-09 10:37:14.265094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.121 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:14.490 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:14.490 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:14.490 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.490 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:14.490 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:16.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:16.391 
10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.391 [2024-12-09 10:37:17.561229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.391 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.650 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:17.688 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:17.688 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:17.688 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.688 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:17.688 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:19.590 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:19.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 10:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 [2024-12-09 10:37:20.876149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.848 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:21.222 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:21.222 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.222 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.222 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.222 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:23.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.114 [2024-12-09 10:37:24.268189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.114 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.371 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.371 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:24.302 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:24.302 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.302 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:26:24.302 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.302 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:26.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 [2024-12-09 10:37:27.566534] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.830 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:27.765 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:27.765 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.765 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.765 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.765 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:29.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.662 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 [2024-12-09 10:37:30.873034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 [2024-12-09 10:37:30.921098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 
10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:26:29.921 [2024-12-09 10:37:30.969239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 [2024-12-09 10:37:31.017421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:26:29.921 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 [2024-12-09 10:37:31.065574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.922 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.180 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:26:30.180 "tick_rate": 2300000000, 00:26:30.180 "poll_groups": [ 00:26:30.180 { 00:26:30.180 "name": "nvmf_tgt_poll_group_000", 00:26:30.180 "admin_qpairs": 2, 00:26:30.180 "io_qpairs": 168, 00:26:30.180 "current_admin_qpairs": 0, 00:26:30.180 "current_io_qpairs": 0, 00:26:30.180 "pending_bdev_io": 0, 00:26:30.180 "completed_nvme_io": 268, 00:26:30.180 "transports": [ 00:26:30.180 { 00:26:30.180 "trtype": "TCP" 00:26:30.181 } 00:26:30.181 ] 00:26:30.181 }, 00:26:30.181 { 00:26:30.181 "name": "nvmf_tgt_poll_group_001", 00:26:30.181 "admin_qpairs": 2, 00:26:30.181 "io_qpairs": 168, 00:26:30.181 "current_admin_qpairs": 0, 00:26:30.181 "current_io_qpairs": 0, 00:26:30.181 "pending_bdev_io": 0, 00:26:30.181 "completed_nvme_io": 267, 00:26:30.181 "transports": [ 00:26:30.181 { 00:26:30.181 "trtype": "TCP" 00:26:30.181 } 00:26:30.181 ] 00:26:30.181 }, 00:26:30.181 { 00:26:30.181 "name": "nvmf_tgt_poll_group_002", 00:26:30.181 "admin_qpairs": 1, 00:26:30.181 "io_qpairs": 168, 00:26:30.181 "current_admin_qpairs": 0, 00:26:30.181 "current_io_qpairs": 0, 00:26:30.181 "pending_bdev_io": 0, 
00:26:30.181 "completed_nvme_io": 170, 00:26:30.181 "transports": [ 00:26:30.181 { 00:26:30.181 "trtype": "TCP" 00:26:30.181 } 00:26:30.181 ] 00:26:30.181 }, 00:26:30.181 { 00:26:30.181 "name": "nvmf_tgt_poll_group_003", 00:26:30.181 "admin_qpairs": 2, 00:26:30.181 "io_qpairs": 168, 00:26:30.181 "current_admin_qpairs": 0, 00:26:30.181 "current_io_qpairs": 0, 00:26:30.181 "pending_bdev_io": 0, 00:26:30.181 "completed_nvme_io": 317, 00:26:30.181 "transports": [ 00:26:30.181 { 00:26:30.181 "trtype": "TCP" 00:26:30.181 } 00:26:30.181 ] 00:26:30.181 } 00:26:30.181 ] 00:26:30.181 }' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.181 rmmod nvme_tcp 00:26:30.181 rmmod nvme_fabrics 00:26:30.181 rmmod nvme_keyring 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 594757 ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 594757 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 594757 ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 594757 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594757 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594757' 00:26:30.181 killing process with pid 594757 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 594757 00:26:30.181 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 594757 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.440 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.976 00:26:32.976 real 0m32.696s 00:26:32.976 user 1m39.387s 00:26:32.976 sys 0m6.285s 00:26:32.976 10:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:32.976 ************************************ 00:26:32.976 END TEST nvmf_rpc 00:26:32.976 ************************************ 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:32.976 ************************************ 00:26:32.976 START TEST nvmf_invalid 00:26:32.976 ************************************ 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:26:32.976 * Looking for test storage... 
00:26:32.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:26:32.976 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.977 --rc genhtml_branch_coverage=1 00:26:32.977 --rc 
genhtml_function_coverage=1 00:26:32.977 --rc genhtml_legend=1 00:26:32.977 --rc geninfo_all_blocks=1 00:26:32.977 --rc geninfo_unexecuted_blocks=1 00:26:32.977 00:26:32.977 ' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:32.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.977 --rc genhtml_branch_coverage=1 00:26:32.977 --rc genhtml_function_coverage=1 00:26:32.977 --rc genhtml_legend=1 00:26:32.977 --rc geninfo_all_blocks=1 00:26:32.977 --rc geninfo_unexecuted_blocks=1 00:26:32.977 00:26:32.977 ' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.977 --rc genhtml_branch_coverage=1 00:26:32.977 --rc genhtml_function_coverage=1 00:26:32.977 --rc genhtml_legend=1 00:26:32.977 --rc geninfo_all_blocks=1 00:26:32.977 --rc geninfo_unexecuted_blocks=1 00:26:32.977 00:26:32.977 ' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.977 --rc genhtml_branch_coverage=1 00:26:32.977 --rc genhtml_function_coverage=1 00:26:32.977 --rc genhtml_legend=1 00:26:32.977 --rc geninfo_all_blocks=1 00:26:32.977 --rc geninfo_unexecuted_blocks=1 00:26:32.977 00:26:32.977 ' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.977 10:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.977 10:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.977 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.251 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.251 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.252 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:38.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:38.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:38.252 Found net devices under 0000:86:00.0: cvl_0_0 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:38.252 Found net devices under 0000:86:00.1: cvl_0_1 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.252 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.252 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.512 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:38.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:26:38.512 00:26:38.512 --- 10.0.0.2 ping statistics --- 00:26:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.512 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:26:38.512 00:26:38.512 --- 10.0.0.1 ping statistics --- 00:26:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.512 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.512 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=602560 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 602560 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 602560 ']' 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.512 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:26:38.771 [2024-12-09 10:37:39.697693] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:26:38.771 [2024-12-09 10:37:39.697740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.771 [2024-12-09 10:37:39.767033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:38.771 [2024-12-09 10:37:39.807218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.771 [2024-12-09 10:37:39.807258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.771 [2024-12-09 10:37:39.807265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.771 [2024-12-09 10:37:39.807271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.771 [2024-12-09 10:37:39.807276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:38.771 [2024-12-09 10:37:39.808822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.771 [2024-12-09 10:37:39.808918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.771 [2024-12-09 10:37:39.809015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.771 [2024-12-09 10:37:39.809025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.771 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.771 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:26:38.771 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.771 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.771 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:26:39.030 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.030 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:39.030 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21777 00:26:39.030 [2024-12-09 10:37:40.127884] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:26:39.030 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:26:39.030 { 00:26:39.030 "nqn": "nqn.2016-06.io.spdk:cnode21777", 00:26:39.030 "tgt_name": "foobar", 00:26:39.030 "method": "nvmf_create_subsystem", 00:26:39.030 "req_id": 1 00:26:39.030 } 00:26:39.030 Got JSON-RPC error 
response 00:26:39.030 response: 00:26:39.030 { 00:26:39.030 "code": -32603, 00:26:39.030 "message": "Unable to find target foobar" 00:26:39.030 }' 00:26:39.030 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:26:39.030 { 00:26:39.030 "nqn": "nqn.2016-06.io.spdk:cnode21777", 00:26:39.030 "tgt_name": "foobar", 00:26:39.030 "method": "nvmf_create_subsystem", 00:26:39.030 "req_id": 1 00:26:39.030 } 00:26:39.030 Got JSON-RPC error response 00:26:39.030 response: 00:26:39.030 { 00:26:39.030 "code": -32603, 00:26:39.030 "message": "Unable to find target foobar" 00:26:39.030 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:26:39.030 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:26:39.030 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9832 00:26:39.287 [2024-12-09 10:37:40.332569] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9832: invalid serial number 'SPDKISFASTANDAWESOME' 00:26:39.287 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:26:39.287 { 00:26:39.287 "nqn": "nqn.2016-06.io.spdk:cnode9832", 00:26:39.287 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:26:39.287 "method": "nvmf_create_subsystem", 00:26:39.287 "req_id": 1 00:26:39.287 } 00:26:39.287 Got JSON-RPC error response 00:26:39.287 response: 00:26:39.287 { 00:26:39.287 "code": -32602, 00:26:39.288 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:26:39.288 }' 00:26:39.288 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:26:39.288 { 00:26:39.288 "nqn": "nqn.2016-06.io.spdk:cnode9832", 00:26:39.288 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:26:39.288 "method": "nvmf_create_subsystem", 00:26:39.288 
"req_id": 1 00:26:39.288 } 00:26:39.288 Got JSON-RPC error response 00:26:39.288 response: 00:26:39.288 { 00:26:39.288 "code": -32602, 00:26:39.288 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:26:39.288 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:26:39.288 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:26:39.288 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30791 00:26:39.546 [2024-12-09 10:37:40.541246] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30791: invalid model number 'SPDK_Controller' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:26:39.546 { 00:26:39.546 "nqn": "nqn.2016-06.io.spdk:cnode30791", 00:26:39.546 "model_number": "SPDK_Controller\u001f", 00:26:39.546 "method": "nvmf_create_subsystem", 00:26:39.546 "req_id": 1 00:26:39.546 } 00:26:39.546 Got JSON-RPC error response 00:26:39.546 response: 00:26:39.546 { 00:26:39.546 "code": -32602, 00:26:39.546 "message": "Invalid MN SPDK_Controller\u001f" 00:26:39.546 }' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:26:39.546 { 00:26:39.546 "nqn": "nqn.2016-06.io.spdk:cnode30791", 00:26:39.546 "model_number": "SPDK_Controller\u001f", 00:26:39.546 "method": "nvmf_create_subsystem", 00:26:39.546 "req_id": 1 00:26:39.546 } 00:26:39.546 Got JSON-RPC error response 00:26:39.546 response: 00:26:39.546 { 00:26:39.546 "code": -32602, 00:26:39.546 "message": "Invalid MN SPDK_Controller\u001f" 00:26:39.546 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.546 10:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.546 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:26:39.547 10:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 
00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:26:39.547 
10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:26:39.547 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dkq}mA0.)vtCtr]GM"0Zur?4e1G0+Kd/<{z49s>|k"WZ' 00:26:40.068 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'R6JE7O'\''zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k"WZ' nqn.2016-06.io.spdk:cnode25473 00:26:40.326 [2024-12-09 10:37:41.359961] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25473: invalid model number 'R6JE7O'zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k"WZ' 00:26:40.326 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:26:40.326 { 00:26:40.326 "nqn": "nqn.2016-06.io.spdk:cnode25473", 00:26:40.326 "model_number": "R6JE7O'\''zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k\"WZ", 00:26:40.326 
"method": "nvmf_create_subsystem", 00:26:40.326 "req_id": 1 00:26:40.326 } 00:26:40.326 Got JSON-RPC error response 00:26:40.326 response: 00:26:40.326 { 00:26:40.326 "code": -32602, 00:26:40.326 "message": "Invalid MN R6JE7O'\''zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k\"WZ" 00:26:40.326 }' 00:26:40.326 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:26:40.326 { 00:26:40.326 "nqn": "nqn.2016-06.io.spdk:cnode25473", 00:26:40.326 "model_number": "R6JE7O'zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k\"WZ", 00:26:40.326 "method": "nvmf_create_subsystem", 00:26:40.326 "req_id": 1 00:26:40.326 } 00:26:40.326 Got JSON-RPC error response 00:26:40.326 response: 00:26:40.326 { 00:26:40.326 "code": -32602, 00:26:40.326 "message": "Invalid MN R6JE7O'zBx--Vi)>Zur?4e1G0+Kd/<{z49s>|k\"WZ" 00:26:40.326 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:26:40.326 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:26:40.584 [2024-12-09 10:37:41.564713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.584 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:26:40.843 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:26:40.843 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:26:40.843 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:26:40.843 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:26:40.843 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 
4421 00:26:40.843 [2024-12-09 10:37:41.974096] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:26:40.843 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:26:40.843 { 00:26:40.843 "nqn": "nqn.2016-06.io.spdk:cnode", 00:26:40.843 "listen_address": { 00:26:40.843 "trtype": "tcp", 00:26:40.843 "traddr": "", 00:26:40.843 "trsvcid": "4421" 00:26:40.843 }, 00:26:40.843 "method": "nvmf_subsystem_remove_listener", 00:26:40.843 "req_id": 1 00:26:40.843 } 00:26:40.843 Got JSON-RPC error response 00:26:40.843 response: 00:26:40.843 { 00:26:40.843 "code": -32602, 00:26:40.843 "message": "Invalid parameters" 00:26:40.843 }' 00:26:40.843 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:26:40.843 { 00:26:40.843 "nqn": "nqn.2016-06.io.spdk:cnode", 00:26:40.843 "listen_address": { 00:26:40.843 "trtype": "tcp", 00:26:40.843 "traddr": "", 00:26:40.843 "trsvcid": "4421" 00:26:40.843 }, 00:26:40.843 "method": "nvmf_subsystem_remove_listener", 00:26:40.843 "req_id": 1 00:26:40.843 } 00:26:40.843 Got JSON-RPC error response 00:26:40.843 response: 00:26:40.843 { 00:26:40.843 "code": -32602, 00:26:40.843 "message": "Invalid parameters" 00:26:40.843 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:26:40.843 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27481 -i 0 00:26:41.102 [2024-12-09 10:37:42.186774] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27481: invalid cntlid range [0-65519] 00:26:41.102 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:26:41.102 { 00:26:41.102 "nqn": "nqn.2016-06.io.spdk:cnode27481", 00:26:41.102 "min_cntlid": 0, 00:26:41.102 "method": "nvmf_create_subsystem", 00:26:41.102 "req_id": 1 00:26:41.102 } 
00:26:41.102 Got JSON-RPC error response 00:26:41.102 response: 00:26:41.102 { 00:26:41.102 "code": -32602, 00:26:41.102 "message": "Invalid cntlid range [0-65519]" 00:26:41.102 }' 00:26:41.102 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:26:41.102 { 00:26:41.102 "nqn": "nqn.2016-06.io.spdk:cnode27481", 00:26:41.102 "min_cntlid": 0, 00:26:41.102 "method": "nvmf_create_subsystem", 00:26:41.102 "req_id": 1 00:26:41.102 } 00:26:41.102 Got JSON-RPC error response 00:26:41.102 response: 00:26:41.102 { 00:26:41.102 "code": -32602, 00:26:41.102 "message": "Invalid cntlid range [0-65519]" 00:26:41.102 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:26:41.102 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9995 -i 65520 00:26:41.361 [2024-12-09 10:37:42.403517] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9995: invalid cntlid range [65520-65519] 00:26:41.361 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:26:41.361 { 00:26:41.361 "nqn": "nqn.2016-06.io.spdk:cnode9995", 00:26:41.361 "min_cntlid": 65520, 00:26:41.361 "method": "nvmf_create_subsystem", 00:26:41.361 "req_id": 1 00:26:41.361 } 00:26:41.361 Got JSON-RPC error response 00:26:41.361 response: 00:26:41.361 { 00:26:41.361 "code": -32602, 00:26:41.361 "message": "Invalid cntlid range [65520-65519]" 00:26:41.361 }' 00:26:41.361 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:26:41.361 { 00:26:41.361 "nqn": "nqn.2016-06.io.spdk:cnode9995", 00:26:41.361 "min_cntlid": 65520, 00:26:41.361 "method": "nvmf_create_subsystem", 00:26:41.361 "req_id": 1 00:26:41.361 } 00:26:41.361 Got JSON-RPC error response 00:26:41.361 response: 00:26:41.361 { 00:26:41.361 "code": -32602, 00:26:41.361 "message": 
"Invalid cntlid range [65520-65519]" 00:26:41.361 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:26:41.361 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2108 -I 0 00:26:41.619 [2024-12-09 10:37:42.612242] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2108: invalid cntlid range [1-0] 00:26:41.619 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:26:41.619 { 00:26:41.619 "nqn": "nqn.2016-06.io.spdk:cnode2108", 00:26:41.619 "max_cntlid": 0, 00:26:41.619 "method": "nvmf_create_subsystem", 00:26:41.619 "req_id": 1 00:26:41.619 } 00:26:41.619 Got JSON-RPC error response 00:26:41.619 response: 00:26:41.619 { 00:26:41.619 "code": -32602, 00:26:41.619 "message": "Invalid cntlid range [1-0]" 00:26:41.619 }' 00:26:41.619 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:26:41.619 { 00:26:41.619 "nqn": "nqn.2016-06.io.spdk:cnode2108", 00:26:41.619 "max_cntlid": 0, 00:26:41.619 "method": "nvmf_create_subsystem", 00:26:41.619 "req_id": 1 00:26:41.619 } 00:26:41.619 Got JSON-RPC error response 00:26:41.619 response: 00:26:41.619 { 00:26:41.619 "code": -32602, 00:26:41.619 "message": "Invalid cntlid range [1-0]" 00:26:41.619 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:26:41.619 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28981 -I 65520 00:26:41.877 [2024-12-09 10:37:42.804910] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28981: invalid cntlid range [1-65520] 00:26:41.877 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:26:41.877 { 00:26:41.877 "nqn": 
"nqn.2016-06.io.spdk:cnode28981", 00:26:41.877 "max_cntlid": 65520, 00:26:41.877 "method": "nvmf_create_subsystem", 00:26:41.877 "req_id": 1 00:26:41.877 } 00:26:41.877 Got JSON-RPC error response 00:26:41.877 response: 00:26:41.877 { 00:26:41.877 "code": -32602, 00:26:41.877 "message": "Invalid cntlid range [1-65520]" 00:26:41.877 }' 00:26:41.877 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:26:41.877 { 00:26:41.877 "nqn": "nqn.2016-06.io.spdk:cnode28981", 00:26:41.877 "max_cntlid": 65520, 00:26:41.877 "method": "nvmf_create_subsystem", 00:26:41.877 "req_id": 1 00:26:41.877 } 00:26:41.877 Got JSON-RPC error response 00:26:41.877 response: 00:26:41.877 { 00:26:41.877 "code": -32602, 00:26:41.877 "message": "Invalid cntlid range [1-65520]" 00:26:41.877 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:26:41.877 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19553 -i 6 -I 5 00:26:41.877 [2024-12-09 10:37:43.001580] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19553: invalid cntlid range [6-5] 00:26:41.877 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:26:41.877 { 00:26:41.877 "nqn": "nqn.2016-06.io.spdk:cnode19553", 00:26:41.877 "min_cntlid": 6, 00:26:41.877 "max_cntlid": 5, 00:26:41.877 "method": "nvmf_create_subsystem", 00:26:41.877 "req_id": 1 00:26:41.877 } 00:26:41.877 Got JSON-RPC error response 00:26:41.877 response: 00:26:41.877 { 00:26:41.877 "code": -32602, 00:26:41.877 "message": "Invalid cntlid range [6-5]" 00:26:41.877 }' 00:26:41.877 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:26:41.877 { 00:26:41.877 "nqn": "nqn.2016-06.io.spdk:cnode19553", 00:26:41.877 "min_cntlid": 6, 00:26:41.877 "max_cntlid": 5, 00:26:41.877 "method": 
"nvmf_create_subsystem", 00:26:41.877 "req_id": 1 00:26:41.877 } 00:26:41.877 Got JSON-RPC error response 00:26:41.877 response: 00:26:41.877 { 00:26:41.877 "code": -32602, 00:26:41.877 "message": "Invalid cntlid range [6-5]" 00:26:41.877 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:26:41.877 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:26:42.136 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:26:42.136 { 00:26:42.136 "name": "foobar", 00:26:42.136 "method": "nvmf_delete_target", 00:26:42.136 "req_id": 1 00:26:42.136 } 00:26:42.136 Got JSON-RPC error response 00:26:42.136 response: 00:26:42.136 { 00:26:42.136 "code": -32602, 00:26:42.137 "message": "The specified target doesn'\''t exist, cannot delete it." 00:26:42.137 }' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:26:42.137 { 00:26:42.137 "name": "foobar", 00:26:42.137 "method": "nvmf_delete_target", 00:26:42.137 "req_id": 1 00:26:42.137 } 00:26:42.137 Got JSON-RPC error response 00:26:42.137 response: 00:26:42.137 { 00:26:42.137 "code": -32602, 00:26:42.137 "message": "The specified target doesn't exist, cannot delete it." 
00:26:42.137 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.137 rmmod nvme_tcp 00:26:42.137 rmmod nvme_fabrics 00:26:42.137 rmmod nvme_keyring 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 602560 ']' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 602560 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 602560 ']' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 602560 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602560 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602560' 00:26:42.137 killing process with pid 602560 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 602560 00:26:42.137 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 602560 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.396 10:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.396 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.926 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.926 00:26:44.926 real 0m11.844s 00:26:44.926 user 0m18.576s 00:26:44.926 sys 0m5.239s 00:26:44.926 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.926 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:26:44.926 ************************************ 00:26:44.926 END TEST nvmf_invalid 00:26:44.926 ************************************ 00:26:44.926 10:37:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:44.927 ************************************ 00:26:44.927 START TEST nvmf_connect_stress 00:26:44.927 ************************************ 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:26:44.927 * Looking for test storage... 
00:26:44.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:44.927 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.927 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:44.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.927 --rc genhtml_branch_coverage=1 00:26:44.927 --rc genhtml_function_coverage=1 00:26:44.927 --rc genhtml_legend=1 00:26:44.927 --rc geninfo_all_blocks=1 00:26:44.927 --rc geninfo_unexecuted_blocks=1 00:26:44.927 00:26:44.927 ' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:44.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.927 --rc genhtml_branch_coverage=1 00:26:44.927 --rc genhtml_function_coverage=1 00:26:44.927 --rc genhtml_legend=1 00:26:44.927 --rc geninfo_all_blocks=1 00:26:44.927 --rc geninfo_unexecuted_blocks=1 00:26:44.927 00:26:44.927 ' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:44.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.927 --rc genhtml_branch_coverage=1 00:26:44.927 --rc genhtml_function_coverage=1 00:26:44.927 --rc genhtml_legend=1 00:26:44.927 --rc geninfo_all_blocks=1 00:26:44.927 --rc geninfo_unexecuted_blocks=1 00:26:44.927 00:26:44.927 ' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:44.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.927 --rc genhtml_branch_coverage=1 00:26:44.927 --rc genhtml_function_coverage=1 00:26:44.927 --rc genhtml_legend=1 00:26:44.927 --rc geninfo_all_blocks=1 00:26:44.927 --rc geninfo_unexecuted_blocks=1 00:26:44.927 00:26:44.927 ' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.927 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.928 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.198 10:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:50.198 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.198 10:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:50.198 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.198 10:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:50.198 Found net devices under 0000:86:00.0: cvl_0_0 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:50.198 Found net devices under 0000:86:00.1: cvl_0_1 
00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.198 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:50.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:26:50.455 00:26:50.455 --- 10.0.0.2 ping statistics --- 00:26:50.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.455 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:50.455 00:26:50.455 --- 10.0.0.1 ping statistics --- 00:26:50.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.455 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:26:50.455 10:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=606749 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 606749 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 606749 ']' 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.455 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:50.455 [2024-12-09 10:37:51.554429] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:26:50.455 [2024-12-09 10:37:51.554474] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.455 [2024-12-09 10:37:51.622679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.713 [2024-12-09 10:37:51.665895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.713 [2024-12-09 10:37:51.665927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.713 [2024-12-09 10:37:51.665935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.713 [2024-12-09 10:37:51.665941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.713 [2024-12-09 10:37:51.665947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.713 [2024-12-09 10:37:51.667140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.713 [2024-12-09 10:37:51.667225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.713 [2024-12-09 10:37:51.667227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.713 [2024-12-09 10:37:51.816959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.713 [2024-12-09 10:37:51.841210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.713 NULL1 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=606789 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.713 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.969 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:51.225 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.225 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:51.225 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:51.225 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.225 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:51.482 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.482 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:51.482 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:51.482 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.482 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:52.047 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.047 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:52.047 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:52.047 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.047 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:52.304 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.304 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:52.304 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:52.305 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.305 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:52.563 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.563 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:52.563 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:52.563 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.563 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.822 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:52.822 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:52.822 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.822 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.079 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.079 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:53.079 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:53.079 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.079 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.644 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.644 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:53.644 10:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:53.644 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.644 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.902 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.902 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:53.902 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:53.902 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.902 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:54.159 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.159 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:54.159 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:54.159 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.159 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:54.416 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.416 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:54.416 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:54.417 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.417 10:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:54.674 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.674 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:54.674 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:54.674 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.674 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:55.240 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.240 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:55.240 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:55.240 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.240 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:55.498 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.498 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:55.498 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:55.498 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.498 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:55.757 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.757 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:55.757 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:55.757 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.757 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:56.015 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.015 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:56.015 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:56.015 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.015 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:56.580 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.580 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:56.580 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:56.580 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.580 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:56.837 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.837 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:56.837 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:56.837 10:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.837 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:57.094 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.094 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:57.094 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:57.094 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.094 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:57.352 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.352 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:57.352 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:57.352 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.352 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:57.610 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.610 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:57.610 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:57.610 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.610 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.174 10:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.174 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:58.174 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:58.174 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.174 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.432 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.432 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:58.432 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:58.432 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.432 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.689 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.689 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:58.689 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:58.689 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.689 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.947 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.947 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:58.947 
10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:58.947 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.947 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:59.514 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.514 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:59.514 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:59.514 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.514 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:59.772 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.772 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:26:59.772 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:59.772 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.772 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.029 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.029 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:27:00.029 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:00.029 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.029 
10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.287 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.287 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:27:00.287 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:00.287 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.287 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.544 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.544 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:27:00.544 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:00.544 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.544 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:01.108 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 606789 00:27:01.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (606789) - No such process 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 606789 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.108 rmmod nvme_tcp 00:27:01.108 rmmod nvme_fabrics 00:27:01.108 rmmod nvme_keyring 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 606749 ']' 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 606749 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 606749 ']' 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 606749 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 606749 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 606749' 00:27:01.108 killing process with pid 606749 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 606749 00:27:01.108 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 606749 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.366 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.264 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.264 00:27:03.264 real 0m18.825s 00:27:03.264 user 0m39.293s 00:27:03.264 sys 0m8.379s 00:27:03.264 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.264 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:03.264 ************************************ 00:27:03.264 END TEST nvmf_connect_stress 00:27:03.264 ************************************ 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.522 ************************************ 00:27:03.522 START TEST nvmf_fused_ordering 00:27:03.522 ************************************ 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:27:03.522 * Looking for test storage... 
00:27:03.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.522 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:27:03.523 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.523 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:03.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.523 --rc genhtml_branch_coverage=1 00:27:03.523 --rc genhtml_function_coverage=1 00:27:03.523 --rc genhtml_legend=1 00:27:03.523 --rc geninfo_all_blocks=1 00:27:03.523 --rc geninfo_unexecuted_blocks=1 00:27:03.523 00:27:03.523 ' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:03.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.523 --rc genhtml_branch_coverage=1 00:27:03.523 --rc genhtml_function_coverage=1 00:27:03.523 --rc genhtml_legend=1 00:27:03.523 --rc geninfo_all_blocks=1 00:27:03.523 --rc geninfo_unexecuted_blocks=1 00:27:03.523 00:27:03.523 ' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:03.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.523 --rc genhtml_branch_coverage=1 00:27:03.523 --rc genhtml_function_coverage=1 00:27:03.523 --rc genhtml_legend=1 00:27:03.523 --rc geninfo_all_blocks=1 00:27:03.523 --rc geninfo_unexecuted_blocks=1 00:27:03.523 00:27:03.523 ' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:03.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.523 --rc genhtml_branch_coverage=1 00:27:03.523 --rc genhtml_function_coverage=1 00:27:03.523 --rc genhtml_legend=1 00:27:03.523 --rc geninfo_all_blocks=1 00:27:03.523 --rc geninfo_unexecuted_blocks=1 00:27:03.523 00:27:03.523 ' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.523 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.781 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.781 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.782 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.224 10:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.224 10:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.224 10:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.224 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.224 Found net devices under 0000:86:00.1: cvl_0_1 
00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.224 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:09.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:27:09.225 00:27:09.225 --- 10.0.0.2 ping statistics --- 00:27:09.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.225 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:27:09.225 00:27:09.225 --- 10.0.0.1 ping statistics --- 00:27:09.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.225 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.225 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.482 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:27:09.482 10:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.482 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=612146 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 612146 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 612146 ']' 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.483 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:09.483 [2024-12-09 10:38:10.454536] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:27:09.483 [2024-12-09 10:38:10.454582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:09.483 [2024-12-09 10:38:10.524509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:09.483 [2024-12-09 10:38:10.567834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:09.483 [2024-12-09 10:38:10.567868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:09.483 [2024-12-09 10:38:10.567875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:09.483 [2024-12-09 10:38:10.567882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:09.483 [2024-12-09 10:38:10.567889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:09.483 [2024-12-09 10:38:10.568448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 [2024-12-09 10:38:10.701789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 [2024-12-09 10:38:10.717955] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 NULL1
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.741 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
[2024-12-09 10:38:10.772087] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
[2024-12-09 10:38:10.772119] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612168 ]
00:27:09.999 Attached to nqn.2016-06.io.spdk:cnode1
00:27:09.999 Namespace ID: 1 size: 1GB
00:27:09.999 fused_ordering(0)
00:27:09.999 fused_ordering(1)
00:27:09.999 fused_ordering(2)
00:27:09.999 fused_ordering(3)
[fused_ordering(4) through fused_ordering(875) elided: the counter increments by one per record with no gaps or reordering, while the wallclock advances from 00:27:09.999 through 00:27:10.257, 00:27:10.824, and 00:27:11.083 to 00:27:11.649]
00:27:11.649 fused_ordering(876)
00:27:11.649 fused_ordering(877) 00:27:11.649 fused_ordering(878) 00:27:11.649 fused_ordering(879) 00:27:11.649 fused_ordering(880) 00:27:11.649 fused_ordering(881) 00:27:11.649 fused_ordering(882) 00:27:11.649 fused_ordering(883) 00:27:11.649 fused_ordering(884) 00:27:11.649 fused_ordering(885) 00:27:11.649 fused_ordering(886) 00:27:11.649 fused_ordering(887) 00:27:11.649 fused_ordering(888) 00:27:11.649 fused_ordering(889) 00:27:11.649 fused_ordering(890) 00:27:11.649 fused_ordering(891) 00:27:11.649 fused_ordering(892) 00:27:11.649 fused_ordering(893) 00:27:11.649 fused_ordering(894) 00:27:11.649 fused_ordering(895) 00:27:11.649 fused_ordering(896) 00:27:11.649 fused_ordering(897) 00:27:11.649 fused_ordering(898) 00:27:11.649 fused_ordering(899) 00:27:11.649 fused_ordering(900) 00:27:11.649 fused_ordering(901) 00:27:11.649 fused_ordering(902) 00:27:11.649 fused_ordering(903) 00:27:11.649 fused_ordering(904) 00:27:11.649 fused_ordering(905) 00:27:11.649 fused_ordering(906) 00:27:11.649 fused_ordering(907) 00:27:11.649 fused_ordering(908) 00:27:11.649 fused_ordering(909) 00:27:11.649 fused_ordering(910) 00:27:11.649 fused_ordering(911) 00:27:11.649 fused_ordering(912) 00:27:11.649 fused_ordering(913) 00:27:11.649 fused_ordering(914) 00:27:11.649 fused_ordering(915) 00:27:11.649 fused_ordering(916) 00:27:11.649 fused_ordering(917) 00:27:11.649 fused_ordering(918) 00:27:11.649 fused_ordering(919) 00:27:11.649 fused_ordering(920) 00:27:11.649 fused_ordering(921) 00:27:11.649 fused_ordering(922) 00:27:11.649 fused_ordering(923) 00:27:11.649 fused_ordering(924) 00:27:11.649 fused_ordering(925) 00:27:11.649 fused_ordering(926) 00:27:11.649 fused_ordering(927) 00:27:11.649 fused_ordering(928) 00:27:11.649 fused_ordering(929) 00:27:11.649 fused_ordering(930) 00:27:11.649 fused_ordering(931) 00:27:11.649 fused_ordering(932) 00:27:11.650 fused_ordering(933) 00:27:11.650 fused_ordering(934) 00:27:11.650 fused_ordering(935) 00:27:11.650 fused_ordering(936) 00:27:11.650 
fused_ordering(937) 00:27:11.650 fused_ordering(938) 00:27:11.650 fused_ordering(939) 00:27:11.650 fused_ordering(940) 00:27:11.650 fused_ordering(941) 00:27:11.650 fused_ordering(942) 00:27:11.650 fused_ordering(943) 00:27:11.650 fused_ordering(944) 00:27:11.650 fused_ordering(945) 00:27:11.650 fused_ordering(946) 00:27:11.650 fused_ordering(947) 00:27:11.650 fused_ordering(948) 00:27:11.650 fused_ordering(949) 00:27:11.650 fused_ordering(950) 00:27:11.650 fused_ordering(951) 00:27:11.650 fused_ordering(952) 00:27:11.650 fused_ordering(953) 00:27:11.650 fused_ordering(954) 00:27:11.650 fused_ordering(955) 00:27:11.650 fused_ordering(956) 00:27:11.650 fused_ordering(957) 00:27:11.650 fused_ordering(958) 00:27:11.650 fused_ordering(959) 00:27:11.650 fused_ordering(960) 00:27:11.650 fused_ordering(961) 00:27:11.650 fused_ordering(962) 00:27:11.650 fused_ordering(963) 00:27:11.650 fused_ordering(964) 00:27:11.650 fused_ordering(965) 00:27:11.650 fused_ordering(966) 00:27:11.650 fused_ordering(967) 00:27:11.650 fused_ordering(968) 00:27:11.650 fused_ordering(969) 00:27:11.650 fused_ordering(970) 00:27:11.650 fused_ordering(971) 00:27:11.650 fused_ordering(972) 00:27:11.650 fused_ordering(973) 00:27:11.650 fused_ordering(974) 00:27:11.650 fused_ordering(975) 00:27:11.650 fused_ordering(976) 00:27:11.650 fused_ordering(977) 00:27:11.650 fused_ordering(978) 00:27:11.650 fused_ordering(979) 00:27:11.650 fused_ordering(980) 00:27:11.650 fused_ordering(981) 00:27:11.650 fused_ordering(982) 00:27:11.650 fused_ordering(983) 00:27:11.650 fused_ordering(984) 00:27:11.650 fused_ordering(985) 00:27:11.650 fused_ordering(986) 00:27:11.650 fused_ordering(987) 00:27:11.650 fused_ordering(988) 00:27:11.650 fused_ordering(989) 00:27:11.650 fused_ordering(990) 00:27:11.650 fused_ordering(991) 00:27:11.650 fused_ordering(992) 00:27:11.650 fused_ordering(993) 00:27:11.650 fused_ordering(994) 00:27:11.650 fused_ordering(995) 00:27:11.650 fused_ordering(996) 00:27:11.650 fused_ordering(997) 
00:27:11.650 fused_ordering(998) 00:27:11.650 fused_ordering(999) 00:27:11.650 fused_ordering(1000) 00:27:11.650 fused_ordering(1001) 00:27:11.650 fused_ordering(1002) 00:27:11.650 fused_ordering(1003) 00:27:11.650 fused_ordering(1004) 00:27:11.650 fused_ordering(1005) 00:27:11.650 fused_ordering(1006) 00:27:11.650 fused_ordering(1007) 00:27:11.650 fused_ordering(1008) 00:27:11.650 fused_ordering(1009) 00:27:11.650 fused_ordering(1010) 00:27:11.650 fused_ordering(1011) 00:27:11.650 fused_ordering(1012) 00:27:11.650 fused_ordering(1013) 00:27:11.650 fused_ordering(1014) 00:27:11.650 fused_ordering(1015) 00:27:11.650 fused_ordering(1016) 00:27:11.650 fused_ordering(1017) 00:27:11.650 fused_ordering(1018) 00:27:11.650 fused_ordering(1019) 00:27:11.650 fused_ordering(1020) 00:27:11.650 fused_ordering(1021) 00:27:11.650 fused_ordering(1022) 00:27:11.650 fused_ordering(1023) 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:11.650 rmmod nvme_tcp 00:27:11.650 rmmod nvme_fabrics 00:27:11.650 rmmod nvme_keyring 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 612146 ']' 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 612146 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 612146 ']' 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 612146 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 612146 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 612146' 00:27:11.650 killing process with pid 612146 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 612146 00:27:11.650 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 612146 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
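The killprocess sequence traced above (check the pid is alive with `kill -0`, read the process name via `ps`, refuse to kill `sudo`, then kill and reap) can be sketched as a standalone helper. This is a hedged reconstruction inferred from the autotest_common.sh xtrace lines, not the exact SPDK implementation; the function name `killprocess_sketch` is illustrative.

```shell
# Hedged sketch of the killprocess logic seen in the xtrace above.
# Assumptions are marked; only commands visible in the trace are used.
killprocess_sketch() {
    local pid=$1
    [[ -z $pid ]] && return 1
    # Probe liveness without sending a signal; treat a dead pid as done.
    kill -0 "$pid" 2>/dev/null || return 0
    # Mirror the trace: resolve the command name, never kill sudo itself.
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child so no zombie is left; ignore the signal exit status.
    wait "$pid" 2>/dev/null
    return 0
}
```

The trace additionally retries `wait` under a separate guard (`@978 -- # wait 612146`); a production version would likely escalate to `kill -9` after a timeout, as the prologue of this job does.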
00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.908 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.441 00:27:14.441 real 0m10.546s 00:27:14.441 user 0m5.035s 00:27:14.441 sys 0m5.738s 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:14.441 ************************************ 00:27:14.441 END TEST nvmf_fused_ordering 00:27:14.441 ************************************ 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:27:14.441 10:38:15 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:14.441 ************************************ 00:27:14.441 START TEST nvmf_ns_masking 00:27:14.441 ************************************ 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:27:14.441 * Looking for test storage... 00:27:14.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.441 10:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.441 --rc genhtml_branch_coverage=1 00:27:14.441 --rc genhtml_function_coverage=1 00:27:14.441 --rc genhtml_legend=1 00:27:14.441 --rc geninfo_all_blocks=1 00:27:14.441 --rc geninfo_unexecuted_blocks=1 00:27:14.441 00:27:14.441 ' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.441 --rc genhtml_branch_coverage=1 00:27:14.441 --rc genhtml_function_coverage=1 00:27:14.441 --rc genhtml_legend=1 00:27:14.441 --rc geninfo_all_blocks=1 00:27:14.441 --rc geninfo_unexecuted_blocks=1 00:27:14.441 00:27:14.441 ' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.441 --rc genhtml_branch_coverage=1 00:27:14.441 --rc genhtml_function_coverage=1 00:27:14.441 --rc genhtml_legend=1 00:27:14.441 --rc geninfo_all_blocks=1 00:27:14.441 --rc geninfo_unexecuted_blocks=1 00:27:14.441 00:27:14.441 ' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.441 --rc genhtml_branch_coverage=1 00:27:14.441 --rc 
genhtml_function_coverage=1 00:27:14.441 --rc genhtml_legend=1 00:27:14.441 --rc geninfo_all_blocks=1 00:27:14.441 --rc geninfo_unexecuted_blocks=1 00:27:14.441 00:27:14.441 ' 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.441 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bf37b00f-87d6-494e-a726-a9ede78d163e 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=583916d8-1200-4fdc-ab99-ada34f87cc60 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b3722354-844e-4645-92a7-e041b556520e 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.442 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.713 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.713 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:19.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:19.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:27:19.713 Found net devices under 0000:86:00.0: cvl_0_0 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.713 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:19.714 Found net devices under 0000:86:00.1: cvl_0_1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
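The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above strips the sysfs path prefix from each glob match, leaving bare interface names such as `cvl_0_0`. A runnable sketch of that expansion, with the sysfs path hardcoded (the real script populates it from the `"/sys/bus/pci/devices/$pci/net/"*` glob):

```shell
#!/usr/bin/env bash
# Sketch of how common.sh derives interface names from sysfs net paths.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")   # example path
pci_net_devs=("${pci_net_devs[@]##*/}")   # ##*/ removes everything up to the last '/'
net_devs+=("${pci_net_devs[@]}")
echo "${net_devs[0]}"   # prints cvl_0_0
```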
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:27:19.714 00:27:19.714 --- 10.0.0.2 ping statistics --- 00:27:19.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.714 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:27:19.714 00:27:19.714 --- 10.0.0.1 ping statistics --- 00:27:19.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.714 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=615928 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 615928 
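Once the interfaces are reachable, common.sh builds `NVMF_TRANSPORT_OPTS` incrementally: `-t tcp` first, then an extra `-o` flag appended when the transport is tcp, exactly the `'-t tcp -o'` value visible in the trace. A sketch of that accumulation (variable names follow the script; the meaning of `-o` itself is left to the SPDK RPC it is passed to):

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_TRANSPORT_OPTS accumulation seen in nvmf/common.sh.
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ "$TEST_TRANSPORT" == tcp ]]; then
    NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"   # tcp-only option, as in the log
fi
echo "$NVMF_TRANSPORT_OPTS"   # prints: -t tcp -o
```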
00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 615928 ']' 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.714 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:19.714 [2024-12-09 10:38:20.833138] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:27:19.714 [2024-12-09 10:38:20.833190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.973 [2024-12-09 10:38:20.904910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.973 [2024-12-09 10:38:20.945848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.973 [2024-12-09 10:38:20.945885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:19.973 [2024-12-09 10:38:20.945894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.973 [2024-12-09 10:38:20.945900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.973 [2024-12-09 10:38:20.945905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.973 [2024-12-09 10:38:20.946465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.973 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:20.231 [2024-12-09 10:38:21.255708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.231 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:27:20.231 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:27:20.231 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:27:20.490 Malloc1 00:27:20.490 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:20.749 Malloc2 00:27:20.749 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:20.749 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:27:21.008 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.265 [2024-12-09 10:38:22.233989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3722354-844e-4645-92a7-e041b556520e -a 10.0.0.2 -s 4420 -i 4 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:21.265 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:21.265 10:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:23.791 [ 0]:0x1 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:23.791 
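The `waitforserial` helper traced above polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected number of NVMe devices appears, retrying up to 16 times. A generic sketch of that retry loop, with the lsblk probe replaced by a stub so the sketch is self-contained:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial() retry loop from autotest_common.sh.
nvme_device_counter=1 nvme_devices=0 i=0
probe() { echo 1; }   # stub standing in for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
while (( i++ <= 15 )); do
    nvme_devices=$(probe)
    (( nvme_devices == nvme_device_counter )) && break
    sleep 0.1   # the real helper sleeps 2s between attempts
done
echo "$nvme_devices"   # prints 1
```

In the log the same loop succeeds on the first check after `sleep 2`, since one device with serial SPDKISFASTANDAWESOME is already present.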
10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcd94985c848c090e9c5bf07521d29 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcd94985c848c090e9c5bf07521d29 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:23.791 [ 0]:0x1 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcd94985c848c090e9c5bf07521d29 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcd94985c848c090e9c5bf07521d29 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:23.791 [ 1]:0x2 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:27:23.791 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:23.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:23.792 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.058 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3722354-844e-4645-92a7-e041b556520e -a 10.0.0.2 -s 4420 -i 4 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:24.321 10:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:27:24.321 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
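Namespace visibility in this test is decided by the NGUID that `nvme id-ns ... -o json | jq -r .nguid` reports: a visible namespace returns its real NGUID, while a masked one reports all zeros, which is why the `NOT ns_is_visible` check above succeeds by seeing `00000000...`. A sketch of that comparison, using NGUID values copied from the log:

```shell
#!/usr/bin/env bash
# Sketch of the ns_is_visible NGUID comparison from target/ns_masking.sh.
zero="00000000000000000000000000000000"
visible_nguid="87bcd94985c848c090e9c5bf07521d29"   # nsid 1 NGUID from the log
masked_nguid="00000000000000000000000000000000"    # what a masked namespace reports

[[ "$visible_nguid" != "$zero" ]] && echo "ns visible"
[[ "$masked_nguid" == "$zero" ]] && echo "ns masked"
```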
# ns_is_visible 0x2 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:26.849 [ 0]:0x2 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:26.849 [ 0]:0x1 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcd94985c848c090e9c5bf07521d29 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcd94985c848c090e9c5bf07521d29 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:26.849 [ 1]:0x2 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:26.849 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:27.106 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:27.107 [ 0]:0x2 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:27.107 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:27.364 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:27.364 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:27.364 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:27:27.364 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:27.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:27.364 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3722354-844e-4645-92a7-e041b556520e -a 10.0.0.2 -s 4420 -i 4 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
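The `nvmf_ns_add_host` / `nvmf_ns_remove_host` RPCs traced above toggle which host NQNs may see a namespace that was attached with `--no-auto-visible`. A toy shell model of that per-namespace host set (the function and variable names are illustrative only, not SPDK APIs; the real state lives inside the nvmf target):

```shell
#!/usr/bin/env bash
# Toy model of the visibility state the masking RPCs toggle.
declare -A ns1_hosts   # host NQNs currently allowed to see namespace 1

ns_add_host()    { ns1_hosts["$1"]=1; }            # ~ nvmf_ns_add_host
ns_remove_host() { unset "ns1_hosts[$1]"; }        # ~ nvmf_ns_remove_host
ns_is_visible()  { [[ -n "${ns1_hosts[$1]:-}" ]]; }

host="nqn.2016-06.io.spdk:host1"
ns_add_host "$host"
ns_is_visible "$host" && echo "visible after add_host"
ns_remove_host "$host"
ns_is_visible "$host" || echo "masked after remove_host"
```

This mirrors the test sequence in the log: after `add_host` the NGUID check sees the real value; after `remove_host` the same check sees zeros again.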
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:27:27.623 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:30.146 [ 0]:0x1 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcd94985c848c090e9c5bf07521d29 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcd94985c848c090e9c5bf07521d29 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:30.146 [ 1]:0x2 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.146 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:30.146 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:27:30.146 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:30.146 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:30.146 10:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 
00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:30.147 [ 0]:0x2 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:30.147 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:30.404 [2024-12-09 10:38:31.328038] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:27:30.404 request: 00:27:30.404 { 00:27:30.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.404 "nsid": 2, 00:27:30.404 "host": "nqn.2016-06.io.spdk:host1", 00:27:30.404 "method": "nvmf_ns_remove_host", 00:27:30.404 "req_id": 1 00:27:30.404 } 00:27:30.404 Got JSON-RPC error response 00:27:30.404 response: 00:27:30.404 { 00:27:30.404 "code": -32602, 00:27:30.404 "message": "Invalid parameters" 00:27:30.404 } 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:30.404 [ 0]:0x2 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0471f17266fb48df8586b206b66d4fe4 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0471f17266fb48df8586b206b66d4fe4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:30.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=617917 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.404 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 617917 /var/tmp/host.sock 00:27:30.404 10:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 617917 ']' 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.405 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:30.405 [2024-12-09 10:38:31.563296] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:27:30.405 [2024-12-09 10:38:31.563342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617917 ] 00:27:30.662 [2024-12-09 10:38:31.626836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.662 [2024-12-09 10:38:31.667755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.920 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.920 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:27:30.920 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.920 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:31.177 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bf37b00f-87d6-494e-a726-a9ede78d163e 00:27:31.177 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:27:31.177 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BF37B00F87D6494EA726A9EDE78D163E -i 00:27:31.435 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 583916d8-1200-4fdc-ab99-ada34f87cc60 00:27:31.435 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:27:31.435 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 583916D812004FDCAB99ADA34F87CC60 -i 00:27:31.693 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:31.693 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:27:31.951 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:27:31.951 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:27:32.208 nvme0n1 00:27:32.208 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:27:32.208 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:27:32.774 nvme1n2 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:27:32.774 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:27:33.032 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bf37b00f-87d6-494e-a726-a9ede78d163e == \b\f\3\7\b\0\0\f\-\8\7\d\6\-\4\9\4\e\-\a\7\2\6\-\a\9\e\d\e\7\8\d\1\6\3\e ]] 00:27:33.032 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:27:33.032 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:27:33.032 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:27:33.290 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 583916d8-1200-4fdc-ab99-ada34f87cc60 == \5\8\3\9\1\6\d\8\-\1\2\0\0\-\4\f\d\c\-\a\b\9\9\-\a\d\a\3\4\f\8\7\c\c\6\0 ]] 00:27:33.290 10:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid bf37b00f-87d6-494e-a726-a9ede78d163e 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BF37B00F87D6494EA726A9EDE78D163E 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BF37B00F87D6494EA726A9EDE78D163E 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:33.549 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:33.550 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BF37B00F87D6494EA726A9EDE78D163E 00:27:33.808 [2024-12-09 10:38:34.853879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:27:33.809 [2024-12-09 10:38:34.853910] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:27:33.809 [2024-12-09 10:38:34.853917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:33.809 request: 00:27:33.809 { 00:27:33.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.809 "namespace": { 00:27:33.809 "bdev_name": "invalid", 00:27:33.809 "nsid": 1, 00:27:33.809 "nguid": "BF37B00F87D6494EA726A9EDE78D163E", 00:27:33.809 "no_auto_visible": false, 00:27:33.809 "hide_metadata": false 00:27:33.809 }, 00:27:33.809 "method": "nvmf_subsystem_add_ns", 00:27:33.809 "req_id": 1 00:27:33.809 } 00:27:33.809 Got JSON-RPC error response 00:27:33.809 response: 00:27:33.809 { 00:27:33.809 "code": -32602, 00:27:33.809 "message": "Invalid parameters" 00:27:33.809 } 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:33.809 10:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid bf37b00f-87d6-494e-a726-a9ede78d163e 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:27:33.809 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BF37B00F87D6494EA726A9EDE78D163E -i 00:27:34.066 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:27:35.962 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:27:35.962 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:27:35.962 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 617917 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 617917 ']' 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 617917 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:27:36.220 10:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617917 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617917' 00:27:36.220 killing process with pid 617917 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 617917 00:27:36.220 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 617917 00:27:36.784 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:27:36.785 rmmod nvme_tcp 00:27:36.785 rmmod nvme_fabrics 00:27:36.785 rmmod nvme_keyring 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 615928 ']' 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 615928 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 615928 ']' 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 615928 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.785 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615928 00:27:37.042 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:37.042 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:37.042 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615928' 00:27:37.042 killing process with pid 615928 00:27:37.042 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 615928 00:27:37.042 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 615928 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.042 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.299 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.299 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.299 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.299 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.299 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.197 00:27:39.197 real 0m25.164s 00:27:39.197 user 0m30.431s 00:27:39.197 sys 0m6.574s 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:39.197 ************************************ 00:27:39.197 END TEST nvmf_ns_masking 00:27:39.197 ************************************ 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:27:39.197 
10:38:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:39.197 ************************************ 00:27:39.197 START TEST nvmf_nvme_cli 00:27:39.197 ************************************ 00:27:39.197 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:27:39.455 * Looking for test storage... 00:27:39.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.455 
10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.455 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:39.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.456 --rc genhtml_branch_coverage=1 00:27:39.456 --rc genhtml_function_coverage=1 00:27:39.456 --rc genhtml_legend=1 00:27:39.456 --rc geninfo_all_blocks=1 00:27:39.456 --rc geninfo_unexecuted_blocks=1 00:27:39.456 
00:27:39.456 ' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:39.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.456 --rc genhtml_branch_coverage=1 00:27:39.456 --rc genhtml_function_coverage=1 00:27:39.456 --rc genhtml_legend=1 00:27:39.456 --rc geninfo_all_blocks=1 00:27:39.456 --rc geninfo_unexecuted_blocks=1 00:27:39.456 00:27:39.456 ' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:39.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.456 --rc genhtml_branch_coverage=1 00:27:39.456 --rc genhtml_function_coverage=1 00:27:39.456 --rc genhtml_legend=1 00:27:39.456 --rc geninfo_all_blocks=1 00:27:39.456 --rc geninfo_unexecuted_blocks=1 00:27:39.456 00:27:39.456 ' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:39.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.456 --rc genhtml_branch_coverage=1 00:27:39.456 --rc genhtml_function_coverage=1 00:27:39.456 --rc genhtml_legend=1 00:27:39.456 --rc geninfo_all_blocks=1 00:27:39.456 --rc geninfo_unexecuted_blocks=1 00:27:39.456 00:27:39.456 ' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.456 10:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.456 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.457 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:27:44.813 10:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:44.813 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:44.813 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.813 10:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:44.813 Found net devices under 0000:86:00.0: cvl_0_0 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:44.813 Found net devices under 0000:86:00.1: cvl_0_1 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.813 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.814 10:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.814 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.072 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.072 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.072 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.072 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:27:45.072 00:27:45.072 --- 10.0.0.2 ping statistics --- 00:27:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.072 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:27:45.072 00:27:45.072 --- 10.0.0.1 ping statistics --- 00:27:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.072 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.072 10:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=622416 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 622416 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 622416 ']' 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.072 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:45.072 [2024-12-09 10:38:46.105306] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:27:45.072 [2024-12-09 10:38:46.105352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.072 [2024-12-09 10:38:46.173357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.072 [2024-12-09 10:38:46.218444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.072 [2024-12-09 10:38:46.218476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.072 [2024-12-09 10:38:46.218484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.072 [2024-12-09 10:38:46.218490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.072 [2024-12-09 10:38:46.218496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:45.072 [2024-12-09 10:38:46.220072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.072 [2024-12-09 10:38:46.220172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.072 [2024-12-09 10:38:46.220274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.072 [2024-12-09 10:38:46.220275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.330 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:45.331 [2024-12-09 10:38:46.370827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 Malloc0
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 Malloc1
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 [2024-12-09 10:38:46.473996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.331 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:27:45.588
00:27:45.588 Discovery Log Number of Records 2, Generation counter 2
00:27:45.588 =====Discovery Log Entry 0======
00:27:45.588 trtype: tcp
00:27:45.588 adrfam: ipv4
00:27:45.588 subtype: current discovery subsystem
00:27:45.588 treq: not required
00:27:45.588 portid: 0
00:27:45.588 trsvcid: 4420
00:27:45.588 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:45.588 traddr: 10.0.0.2
00:27:45.588 eflags: explicit discovery connections, duplicate discovery information
00:27:45.588 sectype: none
00:27:45.588 =====Discovery Log Entry 1======
00:27:45.588 trtype: tcp
00:27:45.588 adrfam: ipv4
00:27:45.588 subtype: nvme subsystem
00:27:45.588 treq: not required
00:27:45.588 portid: 0
00:27:45.588 trsvcid: 4420
00:27:45.588 subnqn: nqn.2016-06.io.spdk:cnode1
00:27:45.588 traddr: 10.0.0.2
00:27:45.588 eflags: none
00:27:45.588 sectype: none
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:27:45.588 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:27:46.958 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.858 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:27:48.859 /dev/nvme0n2 ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:27:48.859 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:48.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:48.859 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:49.119 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 622416 ']'
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 622416
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 622416 ']'
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 622416
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622416
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622416'
00:27:49.119 killing process with pid 622416
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 622416
00:27:49.119 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 622416
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:49.377 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:51.912
00:27:51.912 real 0m12.144s
00:27:51.912 user 0m18.113s
00:27:51.912 sys 0m4.755s
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:27:51.912 ************************************
00:27:51.912 END TEST nvmf_nvme_cli
00:27:51.912 ************************************
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:51.912 ************************************
00:27:51.912 START TEST nvmf_vfio_user
00:27:51.912 ************************************
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:27:51.912 * Looking for test storage...
00:27:51.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.912 --rc genhtml_branch_coverage=1
00:27:51.912 --rc genhtml_function_coverage=1
00:27:51.912 --rc genhtml_legend=1
00:27:51.912 --rc geninfo_all_blocks=1
00:27:51.912 --rc geninfo_unexecuted_blocks=1
00:27:51.912
00:27:51.912 '
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.912 --rc genhtml_branch_coverage=1
00:27:51.912 --rc genhtml_function_coverage=1
00:27:51.912 --rc genhtml_legend=1
00:27:51.912 --rc geninfo_all_blocks=1
00:27:51.912 --rc geninfo_unexecuted_blocks=1
00:27:51.912
00:27:51.912 '
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:27:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.912 --rc genhtml_branch_coverage=1
00:27:51.912 --rc genhtml_function_coverage=1
00:27:51.912 --rc genhtml_legend=1
00:27:51.912 --rc geninfo_all_blocks=1
00:27:51.912 --rc geninfo_unexecuted_blocks=1
00:27:51.912
00:27:51.912 '
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:27:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.912 --rc genhtml_branch_coverage=1
00:27:51.912 --rc genhtml_function_coverage=1
00:27:51.912 --rc genhtml_legend=1
00:27:51.912 --rc geninfo_all_blocks=1
00:27:51.912 --rc geninfo_unexecuted_blocks=1
00:27:51.912
00:27:51.912 '
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:51.912 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:51.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=623704
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 623704'
00:27:51.913 Process pid: 623704
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 623704
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 623704 ']'
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:27:51.913 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
[2024-12-09 10:38:52.805118] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
[2024-12-09 10:38:52.805166] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-09 10:38:52.869687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-12-09 10:38:52.912466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-09 10:38:52.912504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-09 10:38:52.912511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-09 10:38:52.912518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-09 10:38:52.912522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:51.913 [2024-12-09 10:38:52.914032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.913 [2024-12-09 10:38:52.914100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.913 [2024-12-09 10:38:52.914184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.913 [2024-12-09 10:38:52.914186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.913 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.913 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:27:51.913 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:27:52.848 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:27:53.107 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:27:53.107 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:27:53.107 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:53.107 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:27:53.107 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:53.366 Malloc1 00:27:53.367 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:27:53.625 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:27:53.884 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:27:54.142 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:54.142 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:27:54.142 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:54.142 Malloc2 00:27:54.142 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:27:54.400 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:27:54.658 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:27:54.918 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:27:54.918 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:27:54.918 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:27:54.918 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:27:54.918 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:27:54.919 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:27:54.919 [2024-12-09 10:38:55.908301] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:27:54.919 [2024-12-09 10:38:55.908348] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624186 ] 00:27:54.919 [2024-12-09 10:38:55.955694] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:27:54.919 [2024-12-09 10:38:55.964352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:54.919 [2024-12-09 10:38:55.964377] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa00089b000 00:27:54.919 [2024-12-09 10:38:55.965353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.966349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.967362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.968358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.969364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.970373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.971377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.972386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:54.919 [2024-12-09 10:38:55.973399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:54.919 [2024-12-09 10:38:55.973409] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa000890000 00:27:54.919 [2024-12-09 10:38:55.974524] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:54.919 [2024-12-09 10:38:55.990171] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:27:54.919 [2024-12-09 10:38:55.990197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:27:54.919 [2024-12-09 10:38:55.992503] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:27:54.919 [2024-12-09 10:38:55.992538] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:27:54.919 [2024-12-09 10:38:55.992615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:27:54.919 [2024-12-09 10:38:55.992631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:27:54.919 [2024-12-09 10:38:55.992637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:27:54.919 [2024-12-09 10:38:55.993507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:27:54.919 [2024-12-09 10:38:55.993518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:27:54.919 [2024-12-09 10:38:55.993525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:27:54.919 [2024-12-09 10:38:55.994512] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:27:54.919 [2024-12-09 10:38:55.994521] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:27:54.919 [2024-12-09 10:38:55.994528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:55.995516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:27:54.919 [2024-12-09 10:38:55.995524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:55.996522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:27:54.919 [2024-12-09 10:38:55.996530] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:27:54.919 [2024-12-09 10:38:55.996535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:55.996541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:55.996646] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:27:54.919 [2024-12-09 10:38:55.996650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:55.996655] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:27:54.919 [2024-12-09 10:38:55.997528] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:27:54.919 [2024-12-09 10:38:55.998537] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:27:54.919 [2024-12-09 10:38:55.999539] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:27:54.919 [2024-12-09 10:38:56.000534] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:54.919 [2024-12-09 10:38:56.000598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:54.919 [2024-12-09 10:38:56.001549] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:27:54.919 [2024-12-09 10:38:56.001556] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:54.919 [2024-12-09 10:38:56.001563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:27:54.919 [2024-12-09 10:38:56.001581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:27:54.919 [2024-12-09 10:38:56.001587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:27:54.919 [2024-12-09 10:38:56.001605] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:54.919 [2024-12-09 10:38:56.001610] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:54.919 [2024-12-09 10:38:56.001613] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.919 [2024-12-09 10:38:56.001626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:54.919 [2024-12-09 10:38:56.001661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:27:54.919 [2024-12-09 10:38:56.001672] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:27:54.919 [2024-12-09 10:38:56.001678] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:27:54.919 [2024-12-09 10:38:56.001682] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:27:54.919 [2024-12-09 10:38:56.001686] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:27:54.919 [2024-12-09 10:38:56.001691] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:27:54.920 [2024-12-09 10:38:56.001695] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:27:54.920 [2024-12-09 10:38:56.001699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.001729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.001739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.920 [2024-12-09 10:38:56.001747] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.920 [2024-12-09 10:38:56.001754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.920 [2024-12-09 10:38:56.001762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.920 [2024-12-09 10:38:56.001766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.001791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.001797] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:27:54.920 [2024-12-09 10:38:56.001802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.001832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.001885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001899] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:27:54.920 [2024-12-09 10:38:56.001903] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:27:54.920 [2024-12-09 10:38:56.001906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.920 [2024-12-09 10:38:56.001912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.001924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.001937] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:27:54.920 [2024-12-09 10:38:56.001946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.001960] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:54.920 [2024-12-09 10:38:56.001964] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:54.920 [2024-12-09 10:38:56.001967] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.920 [2024-12-09 10:38:56.001972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.001992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.002009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002022] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:54.920 [2024-12-09 10:38:56.002028] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:54.920 [2024-12-09 10:38:56.002031] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.920 [2024-12-09 10:38:56.002037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.002046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:27:54.920 [2024-12-09 10:38:56.002054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002088] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:27:54.920 [2024-12-09 10:38:56.002092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:27:54.920 [2024-12-09 10:38:56.002097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:27:54.920 [2024-12-09 10:38:56.002114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.002123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.002133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.002145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.002155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.002165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.002175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:54.920 [2024-12-09 10:38:56.002184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:27:54.920 [2024-12-09 10:38:56.002195] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:27:54.920 [2024-12-09 10:38:56.002199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:27:54.921 [2024-12-09 10:38:56.002202] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:27:54.921 [2024-12-09 10:38:56.002206] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:27:54.921 [2024-12-09 10:38:56.002209] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:27:54.921 [2024-12-09 10:38:56.002215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:27:54.921 [2024-12-09 10:38:56.002224] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:27:54.921 [2024-12-09 10:38:56.002228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:27:54.921 [2024-12-09 10:38:56.002232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.921 [2024-12-09 10:38:56.002237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:27:54.921 [2024-12-09 10:38:56.002243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:27:54.921 [2024-12-09 10:38:56.002247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:54.921 [2024-12-09 10:38:56.002250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.921 [2024-12-09 10:38:56.002255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:54.921 [2024-12-09 10:38:56.002262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:27:54.921 [2024-12-09 10:38:56.002266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:27:54.921 [2024-12-09 10:38:56.002269] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:54.921 [2024-12-09 10:38:56.002274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:27:54.921 [2024-12-09 10:38:56.002280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:27:54.921 [2024-12-09 
10:38:56.002291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:27:54.921 [2024-12-09 10:38:56.002301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:27:54.921 [2024-12-09 10:38:56.002307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:27:54.921 ===================================================== 00:27:54.921 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:54.921 ===================================================== 00:27:54.921 Controller Capabilities/Features 00:27:54.921 ================================ 00:27:54.921 Vendor ID: 4e58 00:27:54.921 Subsystem Vendor ID: 4e58 00:27:54.921 Serial Number: SPDK1 00:27:54.921 Model Number: SPDK bdev Controller 00:27:54.921 Firmware Version: 25.01 00:27:54.921 Recommended Arb Burst: 6 00:27:54.921 IEEE OUI Identifier: 8d 6b 50 00:27:54.921 Multi-path I/O 00:27:54.921 May have multiple subsystem ports: Yes 00:27:54.921 May have multiple controllers: Yes 00:27:54.921 Associated with SR-IOV VF: No 00:27:54.921 Max Data Transfer Size: 131072 00:27:54.921 Max Number of Namespaces: 32 00:27:54.921 Max Number of I/O Queues: 127 00:27:54.921 NVMe Specification Version (VS): 1.3 00:27:54.921 NVMe Specification Version (Identify): 1.3 00:27:54.921 Maximum Queue Entries: 256 00:27:54.921 Contiguous Queues Required: Yes 00:27:54.921 Arbitration Mechanisms Supported 00:27:54.921 Weighted Round Robin: Not Supported 00:27:54.921 Vendor Specific: Not Supported 00:27:54.921 Reset Timeout: 15000 ms 00:27:54.921 Doorbell Stride: 4 bytes 00:27:54.921 NVM Subsystem Reset: Not Supported 00:27:54.921 Command Sets Supported 00:27:54.921 NVM Command Set: Supported 00:27:54.921 Boot Partition: Not Supported 00:27:54.921 Memory Page Size Minimum: 4096 bytes 00:27:54.921 
Memory Page Size Maximum: 4096 bytes 00:27:54.921 Persistent Memory Region: Not Supported 00:27:54.921 Optional Asynchronous Events Supported 00:27:54.921 Namespace Attribute Notices: Supported 00:27:54.921 Firmware Activation Notices: Not Supported 00:27:54.921 ANA Change Notices: Not Supported 00:27:54.921 PLE Aggregate Log Change Notices: Not Supported 00:27:54.921 LBA Status Info Alert Notices: Not Supported 00:27:54.921 EGE Aggregate Log Change Notices: Not Supported 00:27:54.921 Normal NVM Subsystem Shutdown event: Not Supported 00:27:54.921 Zone Descriptor Change Notices: Not Supported 00:27:54.921 Discovery Log Change Notices: Not Supported 00:27:54.921 Controller Attributes 00:27:54.921 128-bit Host Identifier: Supported 00:27:54.921 Non-Operational Permissive Mode: Not Supported 00:27:54.921 NVM Sets: Not Supported 00:27:54.921 Read Recovery Levels: Not Supported 00:27:54.921 Endurance Groups: Not Supported 00:27:54.921 Predictable Latency Mode: Not Supported 00:27:54.921 Traffic Based Keep ALive: Not Supported 00:27:54.921 Namespace Granularity: Not Supported 00:27:54.921 SQ Associations: Not Supported 00:27:54.921 UUID List: Not Supported 00:27:54.921 Multi-Domain Subsystem: Not Supported 00:27:54.921 Fixed Capacity Management: Not Supported 00:27:54.921 Variable Capacity Management: Not Supported 00:27:54.921 Delete Endurance Group: Not Supported 00:27:54.921 Delete NVM Set: Not Supported 00:27:54.921 Extended LBA Formats Supported: Not Supported 00:27:54.921 Flexible Data Placement Supported: Not Supported 00:27:54.921 00:27:54.921 Controller Memory Buffer Support 00:27:54.921 ================================ 00:27:54.921 Supported: No 00:27:54.921 00:27:54.921 Persistent Memory Region Support 00:27:54.921 ================================ 00:27:54.921 Supported: No 00:27:54.921 00:27:54.921 Admin Command Set Attributes 00:27:54.921 ============================ 00:27:54.921 Security Send/Receive: Not Supported 00:27:54.921 Format NVM: Not Supported 
00:27:54.921 Firmware Activate/Download: Not Supported 00:27:54.921 Namespace Management: Not Supported 00:27:54.921 Device Self-Test: Not Supported 00:27:54.921 Directives: Not Supported 00:27:54.921 NVMe-MI: Not Supported 00:27:54.921 Virtualization Management: Not Supported 00:27:54.921 Doorbell Buffer Config: Not Supported 00:27:54.921 Get LBA Status Capability: Not Supported 00:27:54.921 Command & Feature Lockdown Capability: Not Supported 00:27:54.921 Abort Command Limit: 4 00:27:54.921 Async Event Request Limit: 4 00:27:54.921 Number of Firmware Slots: N/A 00:27:54.921 Firmware Slot 1 Read-Only: N/A 00:27:54.921 Firmware Activation Without Reset: N/A 00:27:54.921 Multiple Update Detection Support: N/A 00:27:54.921 Firmware Update Granularity: No Information Provided 00:27:54.921 Per-Namespace SMART Log: No 00:27:54.921 Asymmetric Namespace Access Log Page: Not Supported 00:27:54.922 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:27:54.922 Command Effects Log Page: Supported 00:27:54.922 Get Log Page Extended Data: Supported 00:27:54.922 Telemetry Log Pages: Not Supported 00:27:54.922 Persistent Event Log Pages: Not Supported 00:27:54.922 Supported Log Pages Log Page: May Support 00:27:54.922 Commands Supported & Effects Log Page: Not Supported 00:27:54.922 Feature Identifiers & Effects Log Page:May Support 00:27:54.922 NVMe-MI Commands & Effects Log Page: May Support 00:27:54.922 Data Area 4 for Telemetry Log: Not Supported 00:27:54.922 Error Log Page Entries Supported: 128 00:27:54.922 Keep Alive: Supported 00:27:54.922 Keep Alive Granularity: 10000 ms 00:27:54.922 00:27:54.922 NVM Command Set Attributes 00:27:54.922 ========================== 00:27:54.922 Submission Queue Entry Size 00:27:54.922 Max: 64 00:27:54.922 Min: 64 00:27:54.922 Completion Queue Entry Size 00:27:54.922 Max: 16 00:27:54.922 Min: 16 00:27:54.922 Number of Namespaces: 32 00:27:54.922 Compare Command: Supported 00:27:54.922 Write Uncorrectable Command: Not Supported 00:27:54.922 Dataset 
Management Command: Supported 00:27:54.922 Write Zeroes Command: Supported 00:27:54.922 Set Features Save Field: Not Supported 00:27:54.922 Reservations: Not Supported 00:27:54.922 Timestamp: Not Supported 00:27:54.922 Copy: Supported 00:27:54.922 Volatile Write Cache: Present 00:27:54.922 Atomic Write Unit (Normal): 1 00:27:54.922 Atomic Write Unit (PFail): 1 00:27:54.922 Atomic Compare & Write Unit: 1 00:27:54.922 Fused Compare & Write: Supported 00:27:54.922 Scatter-Gather List 00:27:54.922 SGL Command Set: Supported (Dword aligned) 00:27:54.922 SGL Keyed: Not Supported 00:27:54.922 SGL Bit Bucket Descriptor: Not Supported 00:27:54.922 SGL Metadata Pointer: Not Supported 00:27:54.922 Oversized SGL: Not Supported 00:27:54.922 SGL Metadata Address: Not Supported 00:27:54.922 SGL Offset: Not Supported 00:27:54.922 Transport SGL Data Block: Not Supported 00:27:54.922 Replay Protected Memory Block: Not Supported 00:27:54.922 00:27:54.922 Firmware Slot Information 00:27:54.922 ========================= 00:27:54.922 Active slot: 1 00:27:54.922 Slot 1 Firmware Revision: 25.01 00:27:54.922 00:27:54.922 00:27:54.922 Commands Supported and Effects 00:27:54.922 ============================== 00:27:54.922 Admin Commands 00:27:54.922 -------------- 00:27:54.922 Get Log Page (02h): Supported 00:27:54.922 Identify (06h): Supported 00:27:54.922 Abort (08h): Supported 00:27:54.922 Set Features (09h): Supported 00:27:54.922 Get Features (0Ah): Supported 00:27:54.922 Asynchronous Event Request (0Ch): Supported 00:27:54.922 Keep Alive (18h): Supported 00:27:54.922 I/O Commands 00:27:54.922 ------------ 00:27:54.922 Flush (00h): Supported LBA-Change 00:27:54.922 Write (01h): Supported LBA-Change 00:27:54.922 Read (02h): Supported 00:27:54.922 Compare (05h): Supported 00:27:54.922 Write Zeroes (08h): Supported LBA-Change 00:27:54.922 Dataset Management (09h): Supported LBA-Change 00:27:54.922 Copy (19h): Supported LBA-Change 00:27:54.922 00:27:54.922 Error Log 00:27:54.922 ========= 
00:27:54.922 00:27:54.922 Arbitration 00:27:54.922 =========== 00:27:54.922 Arbitration Burst: 1 00:27:54.922 00:27:54.922 Power Management 00:27:54.922 ================ 00:27:54.922 Number of Power States: 1 00:27:54.922 Current Power State: Power State #0 00:27:54.922 Power State #0: 00:27:54.922 Max Power: 0.00 W 00:27:54.922 Non-Operational State: Operational 00:27:54.922 Entry Latency: Not Reported 00:27:54.922 Exit Latency: Not Reported 00:27:54.922 Relative Read Throughput: 0 00:27:54.922 Relative Read Latency: 0 00:27:54.922 Relative Write Throughput: 0 00:27:54.922 Relative Write Latency: 0 00:27:54.922 Idle Power: Not Reported 00:27:54.922 Active Power: Not Reported 00:27:54.922 Non-Operational Permissive Mode: Not Supported 00:27:54.922 00:27:54.922 Health Information 00:27:54.922 ================== 00:27:54.922 Critical Warnings: 00:27:54.922 Available Spare Space: OK 00:27:54.922 Temperature: OK 00:27:54.922 Device Reliability: OK 00:27:54.922 Read Only: No 00:27:54.922 Volatile Memory Backup: OK 00:27:54.922 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:54.922 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:54.922 Available Spare: 0% 00:27:54.922 [2024-12-09 10:38:56.002395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:27:54.922 [2024-12-09 10:38:56.002407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:27:54.922 [2024-12-09 10:38:56.002436] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:27:54.922 [2024-12-09 10:38:56.002445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.922 [2024-12-09 10:38:56.002451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.922 [2024-12-09 10:38:56.002457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.922 [2024-12-09 10:38:56.002462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.922 [2024-12-09 10:38:56.002556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:27:54.922 [2024-12-09 10:38:56.002566] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:27:54.922 [2024-12-09 10:38:56.003562] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:54.922 [2024-12-09 10:38:56.003608] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:27:54.922 [2024-12-09 10:38:56.003618] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:27:54.922 [2024-12-09 10:38:56.004564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:27:54.922 [2024-12-09 10:38:56.004575] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:27:54.923 [2024-12-09 10:38:56.004624] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:27:54.923 [2024-12-09 10:38:56.010008] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:55.182 Available Spare Threshold: 0% 00:27:55.182 Life Percentage Used: 0% 00:27:55.182 Data Units Read: 0 00:27:55.182 Data 
Units Written: 0 00:27:55.182 Host Read Commands: 0 00:27:55.182 Host Write Commands: 0 00:27:55.182 Controller Busy Time: 0 minutes 00:27:55.182 Power Cycles: 0 00:27:55.182 Power On Hours: 0 hours 00:27:55.182 Unsafe Shutdowns: 0 00:27:55.182 Unrecoverable Media Errors: 0 00:27:55.182 Lifetime Error Log Entries: 0 00:27:55.182 Warning Temperature Time: 0 minutes 00:27:55.182 Critical Temperature Time: 0 minutes 00:27:55.182 00:27:55.182 Number of Queues 00:27:55.182 ================ 00:27:55.182 Number of I/O Submission Queues: 127 00:27:55.182 Number of I/O Completion Queues: 127 00:27:55.182 00:27:55.182 Active Namespaces 00:27:55.182 ================= 00:27:55.182 Namespace ID:1 00:27:55.182 Error Recovery Timeout: Unlimited 00:27:55.182 Command Set Identifier: NVM (00h) 00:27:55.182 Deallocate: Supported 00:27:55.182 Deallocated/Unwritten Error: Not Supported 00:27:55.182 Deallocated Read Value: Unknown 00:27:55.182 Deallocate in Write Zeroes: Not Supported 00:27:55.182 Deallocated Guard Field: 0xFFFF 00:27:55.182 Flush: Supported 00:27:55.182 Reservation: Supported 00:27:55.182 Namespace Sharing Capabilities: Multiple Controllers 00:27:55.182 Size (in LBAs): 131072 (0GiB) 00:27:55.182 Capacity (in LBAs): 131072 (0GiB) 00:27:55.182 Utilization (in LBAs): 131072 (0GiB) 00:27:55.182 NGUID: 957872BC226C4C37BC0984DB8FD75011 00:27:55.182 UUID: 957872bc-226c-4c37-bc09-84db8fd75011 00:27:55.182 Thin Provisioning: Not Supported 00:27:55.182 Per-NS Atomic Units: Yes 00:27:55.182 Atomic Boundary Size (Normal): 0 00:27:55.182 Atomic Boundary Size (PFail): 0 00:27:55.182 Atomic Boundary Offset: 0 00:27:55.182 Maximum Single Source Range Length: 65535 00:27:55.182 Maximum Copy Length: 65535 00:27:55.182 Maximum Source Range Count: 1 00:27:55.182 NGUID/EUI64 Never Reused: No 00:27:55.182 Namespace Write Protected: No 00:27:55.182 Number of LBA Formats: 1 00:27:55.182 Current LBA Format: LBA Format #00 00:27:55.182 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:27:55.182 00:27:55.182 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:27:55.182 [2024-12-09 10:38:56.331562] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:00.461 Initializing NVMe Controllers 00:28:00.461 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:28:00.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:28:00.461 Initialization complete. Launching workers. 00:28:00.461 ======================================================== 00:28:00.461 Latency(us) 00:28:00.461 Device Information : IOPS MiB/s Average min max 00:28:00.461 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39923.40 155.95 3206.17 977.16 9261.37 00:28:00.461 ======================================================== 00:28:00.461 Total : 39923.40 155.95 3206.17 977.16 9261.37 00:28:00.461 00:28:00.461 [2024-12-09 10:39:01.352879] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:00.461 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:28:00.721 [2024-12-09 10:39:01.688293] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:05.992 Initializing NVMe Controllers 00:28:05.992 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:28:05.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:28:05.992 Initialization complete. Launching workers. 00:28:05.992 ======================================================== 00:28:05.992 Latency(us) 00:28:05.992 Device Information : IOPS MiB/s Average min max 00:28:05.992 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.46 62.40 8018.15 5829.71 15966.36 00:28:05.992 ======================================================== 00:28:05.992 Total : 15974.46 62.40 8018.15 5829.71 15966.36 00:28:05.992 00:28:05.992 [2024-12-09 10:39:06.727016] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:05.992 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:28:05.992 [2024-12-09 10:39:07.025280] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:11.278 [2024-12-09 10:39:12.085237] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:11.278 Initializing NVMe Controllers 00:28:11.278 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:28:11.278 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:28:11.278 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:28:11.278 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:28:11.278 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:28:11.278 Initialization complete. Launching workers. 
00:28:11.278 Starting thread on core 2 00:28:11.278 Starting thread on core 3 00:28:11.278 Starting thread on core 1 00:28:11.278 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:28:11.536 [2024-12-09 10:39:12.480433] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:14.889 [2024-12-09 10:39:15.544340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:14.889 Initializing NVMe Controllers 00:28:14.889 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:28:14.889 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:28:14.889 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:28:14.889 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:28:14.889 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:28:14.889 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:28:14.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:28:14.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:28:14.889 Initialization complete. Launching workers. 
00:28:14.889 Starting thread on core 1 with urgent priority queue 00:28:14.889 Starting thread on core 2 with urgent priority queue 00:28:14.889 Starting thread on core 3 with urgent priority queue 00:28:14.889 Starting thread on core 0 with urgent priority queue 00:28:14.889 SPDK bdev Controller (SPDK1 ) core 0: 8671.67 IO/s 11.53 secs/100000 ios 00:28:14.889 SPDK bdev Controller (SPDK1 ) core 1: 8926.33 IO/s 11.20 secs/100000 ios 00:28:14.889 SPDK bdev Controller (SPDK1 ) core 2: 8614.00 IO/s 11.61 secs/100000 ios 00:28:14.889 SPDK bdev Controller (SPDK1 ) core 3: 7762.00 IO/s 12.88 secs/100000 ios 00:28:14.889 ======================================================== 00:28:14.889 00:28:14.889 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:28:14.889 [2024-12-09 10:39:15.926443] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:14.889 Initializing NVMe Controllers 00:28:14.889 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:28:14.889 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:28:14.889 Namespace ID: 1 size: 0GB 00:28:14.889 Initialization complete. 00:28:14.889 INFO: using host memory buffer for IO 00:28:14.889 Hello world! 
00:28:14.889 [2024-12-09 10:39:15.963683] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:15.148 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:28:15.407 [2024-12-09 10:39:16.350440] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:16.346 Initializing NVMe Controllers 00:28:16.346 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:28:16.346 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:28:16.346 Initialization complete. Launching workers. 00:28:16.346 submit (in ns) avg, min, max = 8719.0, 3205.2, 4000666.1 00:28:16.346 complete (in ns) avg, min, max = 20052.6, 1830.4, 4996955.7 00:28:16.346 00:28:16.346 Submit histogram 00:28:16.346 ================ 00:28:16.346 Range in us Cumulative Count 00:28:16.346 3.200 - 3.214: 0.0062% ( 1) 00:28:16.346 3.270 - 3.283: 0.0124% ( 1) 00:28:16.346 3.283 - 3.297: 0.0373% ( 4) 00:28:16.346 3.297 - 3.311: 0.1429% ( 17) 00:28:16.346 3.311 - 3.325: 0.3293% ( 30) 00:28:16.346 3.325 - 3.339: 0.7393% ( 66) 00:28:16.346 3.339 - 3.353: 2.5224% ( 287) 00:28:16.346 3.353 - 3.367: 6.8713% ( 700) 00:28:16.346 3.367 - 3.381: 12.4441% ( 897) 00:28:16.346 3.381 - 3.395: 18.8618% ( 1033) 00:28:16.346 3.395 - 3.409: 25.4349% ( 1058) 00:28:16.346 3.409 - 3.423: 31.4302% ( 965) 00:28:16.346 3.423 - 3.437: 36.8166% ( 867) 00:28:16.346 3.437 - 3.450: 42.1595% ( 860) 00:28:16.346 3.450 - 3.464: 47.2726% ( 823) 00:28:16.346 3.464 - 3.478: 51.3668% ( 659) 00:28:16.346 3.478 - 3.492: 55.4796% ( 662) 00:28:16.346 3.492 - 3.506: 61.6613% ( 995) 00:28:16.346 3.506 - 3.520: 67.6379% ( 962) 00:28:16.346 3.520 - 3.534: 71.7942% ( 669) 00:28:16.346 3.534 - 3.548: 76.5656% ( 768) 
00:28:16.346 3.548 - 3.562: 80.7965% ( 681) 00:28:16.346 3.562 - 3.590: 85.6051% ( 774) 00:28:16.346 3.590 - 3.617: 86.9968% ( 224) 00:28:16.346 3.617 - 3.645: 87.6553% ( 106) 00:28:16.346 3.645 - 3.673: 88.9786% ( 213) 00:28:16.346 3.673 - 3.701: 90.8487% ( 301) 00:28:16.346 3.701 - 3.729: 92.3894% ( 248) 00:28:16.346 3.729 - 3.757: 94.2160% ( 294) 00:28:16.346 3.757 - 3.784: 96.0860% ( 301) 00:28:16.346 3.784 - 3.812: 97.6019% ( 244) 00:28:16.346 3.812 - 3.840: 98.4095% ( 130) 00:28:16.346 3.840 - 3.868: 99.0060% ( 96) 00:28:16.346 3.868 - 3.896: 99.3166% ( 50) 00:28:16.346 3.896 - 3.923: 99.4968% ( 29) 00:28:16.346 3.923 - 3.951: 99.5589% ( 10) 00:28:16.346 3.951 - 3.979: 99.5775% ( 3) 00:28:16.346 3.979 - 4.007: 99.5900% ( 2) 00:28:16.346 4.035 - 4.063: 99.5962% ( 1) 00:28:16.346 4.063 - 4.090: 99.6024% ( 1) 00:28:16.346 4.369 - 4.397: 99.6086% ( 1) 00:28:16.346 5.426 - 5.454: 99.6148% ( 1) 00:28:16.346 5.454 - 5.482: 99.6210% ( 1) 00:28:16.346 5.482 - 5.510: 99.6272% ( 1) 00:28:16.346 5.537 - 5.565: 99.6397% ( 2) 00:28:16.347 5.621 - 5.649: 99.6459% ( 1) 00:28:16.347 5.760 - 5.788: 99.6521% ( 1) 00:28:16.347 5.899 - 5.927: 99.6583% ( 1) 00:28:16.347 6.066 - 6.094: 99.6645% ( 1) 00:28:16.347 6.428 - 6.456: 99.6707% ( 1) 00:28:16.347 6.511 - 6.539: 99.6769% ( 1) 00:28:16.347 6.790 - 6.817: 99.6832% ( 1) 00:28:16.347 6.845 - 6.873: 99.6956% ( 2) 00:28:16.347 6.873 - 6.901: 99.7018% ( 1) 00:28:16.347 6.901 - 6.929: 99.7080% ( 1) 00:28:16.347 7.012 - 7.040: 99.7266% ( 3) 00:28:16.347 7.123 - 7.179: 99.7329% ( 1) 00:28:16.347 7.179 - 7.235: 99.7391% ( 1) 00:28:16.347 7.402 - 7.457: 99.7453% ( 1) 00:28:16.347 7.624 - 7.680: 99.7577% ( 2) 00:28:16.347 7.736 - 7.791: 99.7639% ( 1) 00:28:16.347 7.791 - 7.847: 99.7701% ( 1) 00:28:16.347 7.847 - 7.903: 99.7763% ( 1) 00:28:16.347 7.958 - 8.014: 99.7826% ( 1) 00:28:16.347 8.014 - 8.070: 99.7888% ( 1) 00:28:16.347 8.070 - 8.125: 99.7950% ( 1) 00:28:16.347 8.237 - 8.292: 99.8012% ( 1) 00:28:16.347 8.348 - 8.403: 99.8074% ( 1) 
00:28:16.347 8.626 - 8.682: 99.8136% ( 1) 00:28:16.347 8.682 - 8.737: 99.8198% ( 1) 00:28:16.347 8.793 - 8.849: 99.8260% ( 1) 00:28:16.347 8.904 - 8.960: 99.8323% ( 1) 00:28:16.347 9.016 - 9.071: 99.8447% ( 2) 00:28:16.347 9.127 - 9.183: 99.8509% ( 1) 00:28:16.347 9.572 - 9.628: 99.8571% ( 1) 00:28:16.347 10.129 - 10.184: 99.8633% ( 1) 00:28:16.347 41.628 - 41.850: 99.8695% ( 1) 00:28:16.347 3989.148 - 4017.642: 100.0000% ( 21) 00:28:16.347 00:28:16.347 Complete histogram 00:28:16.347 ================== 00:28:16.347 [2024-12-09 10:39:17.372567] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:16.347 Range in us Cumulative Count 00:28:16.347 1.823 - 1.837: 0.0621% ( 10) 00:28:16.347 1.837 - 1.850: 0.8946% ( 134) 00:28:16.347 1.850 - 1.864: 2.5721% ( 270) 00:28:16.347 1.864 - 1.878: 3.8581% ( 207) 00:28:16.347 1.878 - 1.892: 11.2575% ( 1191) 00:28:16.347 1.892 - 1.906: 59.3129% ( 7735) 00:28:16.347 1.906 - 1.920: 86.0524% ( 4304) 00:28:16.347 1.920 - 1.934: 92.7311% ( 1075) 00:28:16.347 1.934 - 1.948: 95.0671% ( 376) 00:28:16.347 1.948 - 1.962: 95.7567% ( 111) 00:28:16.347 1.962 - 1.976: 97.2291% ( 237) 00:28:16.347 1.976 - 1.990: 98.5276% ( 209) 00:28:16.347 1.990 - 2.003: 98.9811% ( 73) 00:28:16.347 2.003 - 2.017: 99.1426% ( 26) 00:28:16.347 2.017 - 2.031: 99.2420% ( 16) 00:28:16.347 2.031 - 2.045: 99.2607% ( 3) 00:28:16.347 2.045 - 2.059: 99.2793% ( 3) 00:28:16.347 2.059 - 2.073: 99.2917% ( 2) 00:28:16.347 2.073 - 2.087: 99.3042% ( 2) 00:28:16.347 2.115 - 2.129: 99.3104% ( 1) 00:28:16.347 2.129 - 2.143: 99.3166% ( 1) 00:28:16.347 2.143 - 2.157: 99.3228% ( 1) 00:28:16.347 2.170 - 2.184: 99.3290% ( 1) 00:28:16.347 2.184 - 2.198: 99.3352% ( 1) 00:28:16.347 2.240 - 2.254: 99.3415% ( 1) 00:28:16.347 2.365 - 2.379: 99.3477% ( 1) 00:28:16.347 2.421 - 2.435: 99.3539% ( 1) 00:28:16.347 2.435 - 2.449: 99.3601% ( 1) 00:28:16.347 3.757 - 3.784: 99.3663% ( 1) 00:28:16.347 3.784 - 3.812: 99.3725% ( 1) 00:28:16.347 
4.174 - 4.202: 99.3787% ( 1) 00:28:16.347 4.870 - 4.897: 99.3849% ( 1) 00:28:16.347 5.009 - 5.037: 99.3912% ( 1) 00:28:16.347 5.287 - 5.315: 99.3974% ( 1) 00:28:16.347 5.343 - 5.370: 99.4036% ( 1) 00:28:16.347 5.537 - 5.565: 99.4098% ( 1) 00:28:16.347 5.621 - 5.649: 99.4160% ( 1) 00:28:16.347 5.760 - 5.788: 99.4222% ( 1) 00:28:16.347 5.927 - 5.955: 99.4284% ( 1) 00:28:16.347 6.010 - 6.038: 99.4409% ( 2) 00:28:16.347 6.038 - 6.066: 99.4471% ( 1) 00:28:16.347 6.094 - 6.122: 99.4533% ( 1) 00:28:16.347 6.205 - 6.233: 99.4595% ( 1) 00:28:16.347 6.456 - 6.483: 99.4657% ( 1) 00:28:16.347 6.483 - 6.511: 99.4719% ( 1) 00:28:16.347 6.511 - 6.539: 99.4781% ( 1) 00:28:16.347 6.623 - 6.650: 99.4843% ( 1) 00:28:16.347 6.678 - 6.706: 99.4906% ( 1) 00:28:16.347 6.845 - 6.873: 99.4968% ( 1) 00:28:16.347 6.901 - 6.929: 99.5030% ( 1) 00:28:16.347 7.012 - 7.040: 99.5092% ( 1) 00:28:16.347 7.040 - 7.068: 99.5154% ( 1) 00:28:16.347 7.096 - 7.123: 99.5216% ( 1) 00:28:16.347 7.346 - 7.402: 99.5278% ( 1) 00:28:16.347 9.127 - 9.183: 99.5340% ( 1) 00:28:16.347 12.299 - 12.355: 99.5403% ( 1) 00:28:16.347 16.584 - 16.696: 99.5465% ( 1) 00:28:16.347 3305.294 - 3319.541: 99.5527% ( 1) 00:28:16.347 3989.148 - 4017.642: 99.9938% ( 71) 00:28:16.347 4986.435 - 5014.929: 100.0000% ( 1) 00:28:16.347 00:28:16.347 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:28:16.347 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:28:16.347 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:28:16.347 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:28:16.347 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:28:16.607 [ 00:28:16.607 { 00:28:16.607 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:16.607 "subtype": "Discovery", 00:28:16.607 "listen_addresses": [], 00:28:16.607 "allow_any_host": true, 00:28:16.607 "hosts": [] 00:28:16.607 }, 00:28:16.607 { 00:28:16.607 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:28:16.607 "subtype": "NVMe", 00:28:16.607 "listen_addresses": [ 00:28:16.607 { 00:28:16.607 "trtype": "VFIOUSER", 00:28:16.607 "adrfam": "IPv4", 00:28:16.607 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:28:16.607 "trsvcid": "0" 00:28:16.607 } 00:28:16.607 ], 00:28:16.607 "allow_any_host": true, 00:28:16.607 "hosts": [], 00:28:16.607 "serial_number": "SPDK1", 00:28:16.607 "model_number": "SPDK bdev Controller", 00:28:16.607 "max_namespaces": 32, 00:28:16.607 "min_cntlid": 1, 00:28:16.607 "max_cntlid": 65519, 00:28:16.607 "namespaces": [ 00:28:16.607 { 00:28:16.607 "nsid": 1, 00:28:16.607 "bdev_name": "Malloc1", 00:28:16.607 "name": "Malloc1", 00:28:16.607 "nguid": "957872BC226C4C37BC0984DB8FD75011", 00:28:16.607 "uuid": "957872bc-226c-4c37-bc09-84db8fd75011" 00:28:16.607 } 00:28:16.607 ] 00:28:16.607 }, 00:28:16.607 { 00:28:16.607 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:28:16.607 "subtype": "NVMe", 00:28:16.607 "listen_addresses": [ 00:28:16.607 { 00:28:16.607 "trtype": "VFIOUSER", 00:28:16.607 "adrfam": "IPv4", 00:28:16.607 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:28:16.607 "trsvcid": "0" 00:28:16.607 } 00:28:16.607 ], 00:28:16.607 "allow_any_host": true, 00:28:16.607 "hosts": [], 00:28:16.607 "serial_number": "SPDK2", 00:28:16.607 "model_number": "SPDK bdev Controller", 00:28:16.607 "max_namespaces": 32, 00:28:16.607 "min_cntlid": 1, 00:28:16.607 "max_cntlid": 65519, 00:28:16.607 "namespaces": [ 00:28:16.607 { 00:28:16.607 "nsid": 1, 00:28:16.607 "bdev_name": "Malloc2", 00:28:16.607 "name": "Malloc2", 00:28:16.607 "nguid": 
"09B9DD07693C4DDA8B75729F16A24846", 00:28:16.607 "uuid": "09b9dd07-693c-4dda-8b75-729f16a24846" 00:28:16.607 } 00:28:16.607 ] 00:28:16.607 } 00:28:16.607 ] 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=628376 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:28:16.607 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:28:16.867 [2024-12-09 10:39:17.789511] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:28:16.867 Malloc3 00:28:16.867 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:28:16.867 [2024-12-09 10:39:18.023309] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:28:17.126 Asynchronous Event Request test 00:28:17.126 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:28:17.126 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:28:17.126 Registering asynchronous event callbacks... 00:28:17.126 Starting namespace attribute notice tests for all controllers... 00:28:17.126 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:17.126 aer_cb - Changed Namespace 00:28:17.126 Cleaning up... 
00:28:17.126 [ 00:28:17.126 { 00:28:17.126 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:17.126 "subtype": "Discovery", 00:28:17.126 "listen_addresses": [], 00:28:17.126 "allow_any_host": true, 00:28:17.126 "hosts": [] 00:28:17.126 }, 00:28:17.126 { 00:28:17.126 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:28:17.126 "subtype": "NVMe", 00:28:17.126 "listen_addresses": [ 00:28:17.126 { 00:28:17.126 "trtype": "VFIOUSER", 00:28:17.126 "adrfam": "IPv4", 00:28:17.126 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:28:17.126 "trsvcid": "0" 00:28:17.126 } 00:28:17.126 ], 00:28:17.126 "allow_any_host": true, 00:28:17.126 "hosts": [], 00:28:17.126 "serial_number": "SPDK1", 00:28:17.126 "model_number": "SPDK bdev Controller", 00:28:17.126 "max_namespaces": 32, 00:28:17.126 "min_cntlid": 1, 00:28:17.126 "max_cntlid": 65519, 00:28:17.126 "namespaces": [ 00:28:17.126 { 00:28:17.126 "nsid": 1, 00:28:17.126 "bdev_name": "Malloc1", 00:28:17.126 "name": "Malloc1", 00:28:17.126 "nguid": "957872BC226C4C37BC0984DB8FD75011", 00:28:17.126 "uuid": "957872bc-226c-4c37-bc09-84db8fd75011" 00:28:17.126 }, 00:28:17.126 { 00:28:17.126 "nsid": 2, 00:28:17.126 "bdev_name": "Malloc3", 00:28:17.126 "name": "Malloc3", 00:28:17.126 "nguid": "E3DEB32CE854450D820490A5E3F80089", 00:28:17.126 "uuid": "e3deb32c-e854-450d-8204-90a5e3f80089" 00:28:17.126 } 00:28:17.126 ] 00:28:17.126 }, 00:28:17.126 { 00:28:17.126 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:28:17.126 "subtype": "NVMe", 00:28:17.126 "listen_addresses": [ 00:28:17.126 { 00:28:17.126 "trtype": "VFIOUSER", 00:28:17.126 "adrfam": "IPv4", 00:28:17.126 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:28:17.126 "trsvcid": "0" 00:28:17.126 } 00:28:17.126 ], 00:28:17.126 "allow_any_host": true, 00:28:17.126 "hosts": [], 00:28:17.126 "serial_number": "SPDK2", 00:28:17.126 "model_number": "SPDK bdev Controller", 00:28:17.126 "max_namespaces": 32, 00:28:17.126 "min_cntlid": 1, 00:28:17.126 "max_cntlid": 65519, 00:28:17.126 "namespaces": [ 
00:28:17.126 { 00:28:17.126 "nsid": 1, 00:28:17.126 "bdev_name": "Malloc2", 00:28:17.126 "name": "Malloc2", 00:28:17.126 "nguid": "09B9DD07693C4DDA8B75729F16A24846", 00:28:17.126 "uuid": "09b9dd07-693c-4dda-8b75-729f16a24846" 00:28:17.126 } 00:28:17.126 ] 00:28:17.126 } 00:28:17.126 ] 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 628376 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:28:17.126 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:28:17.126 [2024-12-09 10:39:18.265191] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:28:17.126 [2024-12-09 10:39:18.265231] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628391 ] 00:28:17.387 [2024-12-09 10:39:18.313545] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:28:17.387 [2024-12-09 10:39:18.319228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:28:17.387 [2024-12-09 10:39:18.319252] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f285b751000 00:28:17.387 [2024-12-09 10:39:18.320223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.321227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.322228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.323235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.324245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.325254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.326263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:28:17.387 
[2024-12-09 10:39:18.327265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:28:17.387 [2024-12-09 10:39:18.328268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:28:17.387 [2024-12-09 10:39:18.328279] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f285b746000 00:28:17.387 [2024-12-09 10:39:18.329393] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:28:17.387 [2024-12-09 10:39:18.343646] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:28:17.387 [2024-12-09 10:39:18.343673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:28:17.387 [2024-12-09 10:39:18.348759] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:28:17.387 [2024-12-09 10:39:18.348800] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:28:17.387 [2024-12-09 10:39:18.348874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:28:17.387 [2024-12-09 10:39:18.348889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:28:17.387 [2024-12-09 10:39:18.348894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:28:17.387 [2024-12-09 10:39:18.349769] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:28:17.387 [2024-12-09 10:39:18.349781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:28:17.387 [2024-12-09 10:39:18.349788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:28:17.387 [2024-12-09 10:39:18.350769] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:28:17.387 [2024-12-09 10:39:18.350778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:28:17.387 [2024-12-09 10:39:18.350788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.351784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:28:17.387 [2024-12-09 10:39:18.351793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.352786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:28:17.387 [2024-12-09 10:39:18.352795] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:28:17.387 [2024-12-09 10:39:18.352800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.352806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.352911] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:28:17.387 [2024-12-09 10:39:18.352915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.352920] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:28:17.387 [2024-12-09 10:39:18.353801] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:28:17.387 [2024-12-09 10:39:18.354803] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:28:17.387 [2024-12-09 10:39:18.355810] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:28:17.387 [2024-12-09 10:39:18.356806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:17.387 [2024-12-09 10:39:18.356843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:17.387 [2024-12-09 10:39:18.357816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:28:17.388 [2024-12-09 10:39:18.357825] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:17.388 [2024-12-09 10:39:18.357830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.357847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:28:17.388 [2024-12-09 10:39:18.357854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.357868] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:28:17.388 [2024-12-09 10:39:18.357873] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:28:17.388 [2024-12-09 10:39:18.357876] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.388 [2024-12-09 10:39:18.357887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.365007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.365021] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:28:17.388 [2024-12-09 10:39:18.365026] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:28:17.388 [2024-12-09 10:39:18.365030] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:28:17.388 [2024-12-09 10:39:18.365034] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:28:17.388 [2024-12-09 10:39:18.365039] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:28:17.388 [2024-12-09 10:39:18.365043] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:28:17.388 [2024-12-09 10:39:18.365047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.365055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.365064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.373003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.373015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.388 [2024-12-09 10:39:18.373023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.388 [2024-12-09 10:39:18.373031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.388 [2024-12-09 10:39:18.373038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.388 [2024-12-09 10:39:18.373043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.373051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.373060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.381005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.381015] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:28:17.388 [2024-12-09 10:39:18.381020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.381027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.381032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.381041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.389064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.389072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:28:17.388 
[2024-12-09 10:39:18.389079] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:28:17.388 [2024-12-09 10:39:18.389083] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:28:17.388 [2024-12-09 10:39:18.389086] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.388 [2024-12-09 10:39:18.389092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.397004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.397015] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:28:17.388 [2024-12-09 10:39:18.397028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.397035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.397041] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:28:17.388 [2024-12-09 10:39:18.397045] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:28:17.388 [2024-12-09 10:39:18.397048] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.388 [2024-12-09 10:39:18.397054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.405006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.405019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.405027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.405034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:28:17.388 [2024-12-09 10:39:18.405038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:28:17.388 [2024-12-09 10:39:18.405041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.388 [2024-12-09 10:39:18.405047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.413003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.413013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413051] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:28:17.388 [2024-12-09 10:39:18.413055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:28:17.388 [2024-12-09 10:39:18.413060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:28:17.388 [2024-12-09 10:39:18.413075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.421006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.421018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.429004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.429016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.437005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 
10:39:18.437017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:28:17.388 [2024-12-09 10:39:18.445003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:28:17.388 [2024-12-09 10:39:18.445019] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:28:17.388 [2024-12-09 10:39:18.445024] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:28:17.388 [2024-12-09 10:39:18.445027] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:28:17.388 [2024-12-09 10:39:18.445030] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:28:17.388 [2024-12-09 10:39:18.445033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:28:17.388 [2024-12-09 10:39:18.445039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:28:17.388 [2024-12-09 10:39:18.445046] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:28:17.388 [2024-12-09 10:39:18.445050] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:28:17.388 [2024-12-09 10:39:18.445053] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.389 [2024-12-09 10:39:18.445058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:28:17.389 [2024-12-09 10:39:18.445064] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:28:17.389 [2024-12-09 10:39:18.445068] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:28:17.389 [2024-12-09 10:39:18.445071] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.389 [2024-12-09 10:39:18.445076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:28:17.389 [2024-12-09 10:39:18.445085] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:28:17.389 [2024-12-09 10:39:18.445090] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:28:17.389 [2024-12-09 10:39:18.445093] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:28:17.389 [2024-12-09 10:39:18.445098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:28:17.389 [2024-12-09 10:39:18.453005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:28:17.389 [2024-12-09 10:39:18.453018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:28:17.389 [2024-12-09 10:39:18.453027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:28:17.389 [2024-12-09 10:39:18.453034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:28:17.389 ===================================================== 00:28:17.389 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:28:17.389 ===================================================== 00:28:17.389 Controller Capabilities/Features 00:28:17.389 
================================ 00:28:17.389 Vendor ID: 4e58 00:28:17.389 Subsystem Vendor ID: 4e58 00:28:17.389 Serial Number: SPDK2 00:28:17.389 Model Number: SPDK bdev Controller 00:28:17.389 Firmware Version: 25.01 00:28:17.389 Recommended Arb Burst: 6 00:28:17.389 IEEE OUI Identifier: 8d 6b 50 00:28:17.389 Multi-path I/O 00:28:17.389 May have multiple subsystem ports: Yes 00:28:17.389 May have multiple controllers: Yes 00:28:17.389 Associated with SR-IOV VF: No 00:28:17.389 Max Data Transfer Size: 131072 00:28:17.389 Max Number of Namespaces: 32 00:28:17.389 Max Number of I/O Queues: 127 00:28:17.389 NVMe Specification Version (VS): 1.3 00:28:17.389 NVMe Specification Version (Identify): 1.3 00:28:17.389 Maximum Queue Entries: 256 00:28:17.389 Contiguous Queues Required: Yes 00:28:17.389 Arbitration Mechanisms Supported 00:28:17.389 Weighted Round Robin: Not Supported 00:28:17.389 Vendor Specific: Not Supported 00:28:17.389 Reset Timeout: 15000 ms 00:28:17.389 Doorbell Stride: 4 bytes 00:28:17.389 NVM Subsystem Reset: Not Supported 00:28:17.389 Command Sets Supported 00:28:17.389 NVM Command Set: Supported 00:28:17.389 Boot Partition: Not Supported 00:28:17.389 Memory Page Size Minimum: 4096 bytes 00:28:17.389 Memory Page Size Maximum: 4096 bytes 00:28:17.389 Persistent Memory Region: Not Supported 00:28:17.389 Optional Asynchronous Events Supported 00:28:17.389 Namespace Attribute Notices: Supported 00:28:17.389 Firmware Activation Notices: Not Supported 00:28:17.389 ANA Change Notices: Not Supported 00:28:17.389 PLE Aggregate Log Change Notices: Not Supported 00:28:17.389 LBA Status Info Alert Notices: Not Supported 00:28:17.389 EGE Aggregate Log Change Notices: Not Supported 00:28:17.389 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.389 Zone Descriptor Change Notices: Not Supported 00:28:17.389 Discovery Log Change Notices: Not Supported 00:28:17.389 Controller Attributes 00:28:17.389 128-bit Host Identifier: Supported 00:28:17.389 
Non-Operational Permissive Mode: Not Supported 00:28:17.389 NVM Sets: Not Supported 00:28:17.389 Read Recovery Levels: Not Supported 00:28:17.389 Endurance Groups: Not Supported 00:28:17.389 Predictable Latency Mode: Not Supported 00:28:17.389 Traffic Based Keep ALive: Not Supported 00:28:17.389 Namespace Granularity: Not Supported 00:28:17.389 SQ Associations: Not Supported 00:28:17.389 UUID List: Not Supported 00:28:17.389 Multi-Domain Subsystem: Not Supported 00:28:17.389 Fixed Capacity Management: Not Supported 00:28:17.389 Variable Capacity Management: Not Supported 00:28:17.389 Delete Endurance Group: Not Supported 00:28:17.389 Delete NVM Set: Not Supported 00:28:17.389 Extended LBA Formats Supported: Not Supported 00:28:17.389 Flexible Data Placement Supported: Not Supported 00:28:17.389 00:28:17.389 Controller Memory Buffer Support 00:28:17.389 ================================ 00:28:17.389 Supported: No 00:28:17.389 00:28:17.389 Persistent Memory Region Support 00:28:17.389 ================================ 00:28:17.389 Supported: No 00:28:17.389 00:28:17.389 Admin Command Set Attributes 00:28:17.389 ============================ 00:28:17.389 Security Send/Receive: Not Supported 00:28:17.389 Format NVM: Not Supported 00:28:17.389 Firmware Activate/Download: Not Supported 00:28:17.389 Namespace Management: Not Supported 00:28:17.389 Device Self-Test: Not Supported 00:28:17.389 Directives: Not Supported 00:28:17.389 NVMe-MI: Not Supported 00:28:17.389 Virtualization Management: Not Supported 00:28:17.389 Doorbell Buffer Config: Not Supported 00:28:17.389 Get LBA Status Capability: Not Supported 00:28:17.389 Command & Feature Lockdown Capability: Not Supported 00:28:17.389 Abort Command Limit: 4 00:28:17.389 Async Event Request Limit: 4 00:28:17.389 Number of Firmware Slots: N/A 00:28:17.389 Firmware Slot 1 Read-Only: N/A 00:28:17.389 Firmware Activation Without Reset: N/A 00:28:17.389 Multiple Update Detection Support: N/A 00:28:17.389 Firmware Update 
Granularity: No Information Provided 00:28:17.389 Per-Namespace SMART Log: No 00:28:17.389 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.389 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:28:17.389 Command Effects Log Page: Supported 00:28:17.389 Get Log Page Extended Data: Supported 00:28:17.389 Telemetry Log Pages: Not Supported 00:28:17.389 Persistent Event Log Pages: Not Supported 00:28:17.389 Supported Log Pages Log Page: May Support 00:28:17.389 Commands Supported & Effects Log Page: Not Supported 00:28:17.389 Feature Identifiers & Effects Log Page:May Support 00:28:17.389 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.389 Data Area 4 for Telemetry Log: Not Supported 00:28:17.389 Error Log Page Entries Supported: 128 00:28:17.389 Keep Alive: Supported 00:28:17.389 Keep Alive Granularity: 10000 ms 00:28:17.389 00:28:17.389 NVM Command Set Attributes 00:28:17.389 ========================== 00:28:17.389 Submission Queue Entry Size 00:28:17.389 Max: 64 00:28:17.389 Min: 64 00:28:17.389 Completion Queue Entry Size 00:28:17.389 Max: 16 00:28:17.389 Min: 16 00:28:17.389 Number of Namespaces: 32 00:28:17.389 Compare Command: Supported 00:28:17.389 Write Uncorrectable Command: Not Supported 00:28:17.389 Dataset Management Command: Supported 00:28:17.389 Write Zeroes Command: Supported 00:28:17.389 Set Features Save Field: Not Supported 00:28:17.389 Reservations: Not Supported 00:28:17.389 Timestamp: Not Supported 00:28:17.389 Copy: Supported 00:28:17.389 Volatile Write Cache: Present 00:28:17.389 Atomic Write Unit (Normal): 1 00:28:17.389 Atomic Write Unit (PFail): 1 00:28:17.389 Atomic Compare & Write Unit: 1 00:28:17.389 Fused Compare & Write: Supported 00:28:17.389 Scatter-Gather List 00:28:17.389 SGL Command Set: Supported (Dword aligned) 00:28:17.389 SGL Keyed: Not Supported 00:28:17.389 SGL Bit Bucket Descriptor: Not Supported 00:28:17.389 SGL Metadata Pointer: Not Supported 00:28:17.389 Oversized SGL: Not Supported 00:28:17.389 SGL 
Metadata Address: Not Supported 00:28:17.389 SGL Offset: Not Supported 00:28:17.389 Transport SGL Data Block: Not Supported 00:28:17.389 Replay Protected Memory Block: Not Supported 00:28:17.389 00:28:17.389 Firmware Slot Information 00:28:17.389 ========================= 00:28:17.389 Active slot: 1 00:28:17.389 Slot 1 Firmware Revision: 25.01 00:28:17.389 00:28:17.389 00:28:17.389 Commands Supported and Effects 00:28:17.389 ============================== 00:28:17.389 Admin Commands 00:28:17.389 -------------- 00:28:17.389 Get Log Page (02h): Supported 00:28:17.389 Identify (06h): Supported 00:28:17.389 Abort (08h): Supported 00:28:17.389 Set Features (09h): Supported 00:28:17.389 Get Features (0Ah): Supported 00:28:17.389 Asynchronous Event Request (0Ch): Supported 00:28:17.389 Keep Alive (18h): Supported 00:28:17.389 I/O Commands 00:28:17.389 ------------ 00:28:17.389 Flush (00h): Supported LBA-Change 00:28:17.389 Write (01h): Supported LBA-Change 00:28:17.389 Read (02h): Supported 00:28:17.389 Compare (05h): Supported 00:28:17.389 Write Zeroes (08h): Supported LBA-Change 00:28:17.389 Dataset Management (09h): Supported LBA-Change 00:28:17.389 Copy (19h): Supported LBA-Change 00:28:17.390 00:28:17.390 Error Log 00:28:17.390 ========= 00:28:17.390 00:28:17.390 Arbitration 00:28:17.390 =========== 00:28:17.390 Arbitration Burst: 1 00:28:17.390 00:28:17.390 Power Management 00:28:17.390 ================ 00:28:17.390 Number of Power States: 1 00:28:17.390 Current Power State: Power State #0 00:28:17.390 Power State #0: 00:28:17.390 Max Power: 0.00 W 00:28:17.390 Non-Operational State: Operational 00:28:17.390 Entry Latency: Not Reported 00:28:17.390 Exit Latency: Not Reported 00:28:17.390 Relative Read Throughput: 0 00:28:17.390 Relative Read Latency: 0 00:28:17.390 Relative Write Throughput: 0 00:28:17.390 Relative Write Latency: 0 00:28:17.390 Idle Power: Not Reported 00:28:17.390 Active Power: Not Reported 00:28:17.390 Non-Operational Permissive Mode: Not 
Supported 00:28:17.390 00:28:17.390 Health Information 00:28:17.390 ================== 00:28:17.390 Critical Warnings: 00:28:17.390 Available Spare Space: OK 00:28:17.390 Temperature: OK 00:28:17.390 Device Reliability: OK 00:28:17.390 Read Only: No 00:28:17.390 Volatile Memory Backup: OK 00:28:17.390 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:17.390 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:17.390 Available Spare: 0% 00:28:17.390 Available Sp[2024-12-09 10:39:18.453125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:28:17.390 [2024-12-09 10:39:18.461004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:28:17.390 [2024-12-09 10:39:18.461035] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:28:17.390 [2024-12-09 10:39:18.461044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.390 [2024-12-09 10:39:18.461049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.390 [2024-12-09 10:39:18.461055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.390 [2024-12-09 10:39:18.461060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.390 [2024-12-09 10:39:18.461117] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:28:17.390 [2024-12-09 10:39:18.461128] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:28:17.390 
[2024-12-09 10:39:18.462125] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:17.390 [2024-12-09 10:39:18.462169] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:28:17.390 [2024-12-09 10:39:18.462175] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:28:17.390 [2024-12-09 10:39:18.463134] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:28:17.390 [2024-12-09 10:39:18.463145] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:28:17.390 [2024-12-09 10:39:18.463190] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:28:17.390 [2024-12-09 10:39:18.464344] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:28:17.650 are Threshold: 0% 00:28:17.650 Life Percentage Used: 0% 00:28:17.650 Data Units Read: 0 00:28:17.650 Data Units Written: 0 00:28:17.650 Host Read Commands: 0 00:28:17.650 Host Write Commands: 0 00:28:17.650 Controller Busy Time: 0 minutes 00:28:17.650 Power Cycles: 0 00:28:17.650 Power On Hours: 0 hours 00:28:17.650 Unsafe Shutdowns: 0 00:28:17.650 Unrecoverable Media Errors: 0 00:28:17.650 Lifetime Error Log Entries: 0 00:28:17.650 Warning Temperature Time: 0 minutes 00:28:17.650 Critical Temperature Time: 0 minutes 00:28:17.650 00:28:17.650 Number of Queues 00:28:17.650 ================ 00:28:17.650 Number of I/O Submission Queues: 127 00:28:17.650 Number of I/O Completion Queues: 127 00:28:17.650 00:28:17.650 Active Namespaces 00:28:17.650 ================= 00:28:17.650 Namespace ID:1 00:28:17.650 Error Recovery Timeout: Unlimited 
00:28:17.650 Command Set Identifier: NVM (00h) 00:28:17.650 Deallocate: Supported 00:28:17.650 Deallocated/Unwritten Error: Not Supported 00:28:17.650 Deallocated Read Value: Unknown 00:28:17.650 Deallocate in Write Zeroes: Not Supported 00:28:17.650 Deallocated Guard Field: 0xFFFF 00:28:17.650 Flush: Supported 00:28:17.650 Reservation: Supported 00:28:17.650 Namespace Sharing Capabilities: Multiple Controllers 00:28:17.650 Size (in LBAs): 131072 (0GiB) 00:28:17.650 Capacity (in LBAs): 131072 (0GiB) 00:28:17.650 Utilization (in LBAs): 131072 (0GiB) 00:28:17.650 NGUID: 09B9DD07693C4DDA8B75729F16A24846 00:28:17.650 UUID: 09b9dd07-693c-4dda-8b75-729f16a24846 00:28:17.650 Thin Provisioning: Not Supported 00:28:17.650 Per-NS Atomic Units: Yes 00:28:17.650 Atomic Boundary Size (Normal): 0 00:28:17.650 Atomic Boundary Size (PFail): 0 00:28:17.650 Atomic Boundary Offset: 0 00:28:17.650 Maximum Single Source Range Length: 65535 00:28:17.650 Maximum Copy Length: 65535 00:28:17.650 Maximum Source Range Count: 1 00:28:17.650 NGUID/EUI64 Never Reused: No 00:28:17.650 Namespace Write Protected: No 00:28:17.650 Number of LBA Formats: 1 00:28:17.650 Current LBA Format: LBA Format #00 00:28:17.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:17.650 00:28:17.650 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:28:17.650 [2024-12-09 10:39:18.788124] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:22.924 Initializing NVMe Controllers 00:28:22.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:28:22.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:28:22.924 Initialization complete. Launching workers. 00:28:22.924 ======================================================== 00:28:22.924 Latency(us) 00:28:22.924 Device Information : IOPS MiB/s Average min max 00:28:22.924 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39830.84 155.59 3213.21 1013.76 7671.27 00:28:22.924 ======================================================== 00:28:22.924 Total : 39830.84 155.59 3213.21 1013.76 7671.27 00:28:22.924 00:28:22.924 [2024-12-09 10:39:23.892267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:22.924 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:28:23.184 [2024-12-09 10:39:24.216248] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:28.471 Initializing NVMe Controllers 00:28:28.471 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:28:28.471 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:28:28.471 Initialization complete. Launching workers. 
00:28:28.471 ======================================================== 00:28:28.471 Latency(us) 00:28:28.471 Device Information : IOPS MiB/s Average min max 00:28:28.471 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.95 156.09 3202.96 1000.78 6583.91 00:28:28.471 ======================================================== 00:28:28.471 Total : 39957.95 156.09 3202.96 1000.78 6583.91 00:28:28.471 00:28:28.471 [2024-12-09 10:39:29.237193] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:28.471 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:28:28.471 [2024-12-09 10:39:29.536757] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:33.750 [2024-12-09 10:39:34.665100] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:33.750 Initializing NVMe Controllers 00:28:33.750 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:28:33.750 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:28:33.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:28:33.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:28:33.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:28:33.750 Initialization complete. Launching workers. 
00:28:33.750 Starting thread on core 2 00:28:33.750 Starting thread on core 3 00:28:33.750 Starting thread on core 1 00:28:33.750 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:28:34.108 [2024-12-09 10:39:35.064470] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:37.399 [2024-12-09 10:39:38.134119] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:37.399 Initializing NVMe Controllers 00:28:37.399 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:28:37.399 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:28:37.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:28:37.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:28:37.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:28:37.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:28:37.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:28:37.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:28:37.399 Initialization complete. Launching workers. 
00:28:37.399 Starting thread on core 1 with urgent priority queue 00:28:37.399 Starting thread on core 2 with urgent priority queue 00:28:37.399 Starting thread on core 3 with urgent priority queue 00:28:37.399 Starting thread on core 0 with urgent priority queue 00:28:37.399 SPDK bdev Controller (SPDK2 ) core 0: 7519.33 IO/s 13.30 secs/100000 ios 00:28:37.399 SPDK bdev Controller (SPDK2 ) core 1: 7570.33 IO/s 13.21 secs/100000 ios 00:28:37.399 SPDK bdev Controller (SPDK2 ) core 2: 10196.67 IO/s 9.81 secs/100000 ios 00:28:37.399 SPDK bdev Controller (SPDK2 ) core 3: 8741.67 IO/s 11.44 secs/100000 ios 00:28:37.399 ======================================================== 00:28:37.399 00:28:37.399 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:28:37.399 [2024-12-09 10:39:38.515270] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:37.399 Initializing NVMe Controllers 00:28:37.399 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:28:37.399 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:28:37.399 Namespace ID: 1 size: 0GB 00:28:37.399 Initialization complete. 00:28:37.399 INFO: using host memory buffer for IO 00:28:37.399 Hello world! 
00:28:37.399 [2024-12-09 10:39:38.526350] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:37.657 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:28:37.915 [2024-12-09 10:39:38.898212] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:38.861 Initializing NVMe Controllers 00:28:38.861 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:28:38.861 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:28:38.861 Initialization complete. Launching workers. 00:28:38.861 submit (in ns) avg, min, max = 7184.1, 3210.4, 4001205.2 00:28:38.861 complete (in ns) avg, min, max = 19112.8, 1820.9, 4995969.6 00:28:38.861 00:28:38.861 Submit histogram 00:28:38.861 ================ 00:28:38.861 Range in us Cumulative Count 00:28:38.861 3.200 - 3.214: 0.0062% ( 1) 00:28:38.861 3.214 - 3.228: 0.0435% ( 6) 00:28:38.861 3.228 - 3.242: 0.0746% ( 5) 00:28:38.861 3.242 - 3.256: 0.1306% ( 9) 00:28:38.861 3.256 - 3.270: 0.1804% ( 8) 00:28:38.861 3.270 - 3.283: 0.5971% ( 67) 00:28:38.861 3.283 - 3.297: 2.6995% ( 338) 00:28:38.861 3.297 - 3.311: 7.2650% ( 734) 00:28:38.861 3.311 - 3.325: 12.5832% ( 855) 00:28:38.861 3.325 - 3.339: 18.1750% ( 899) 00:28:38.861 3.339 - 3.353: 23.8291% ( 909) 00:28:38.861 3.353 - 3.367: 29.3836% ( 893) 00:28:38.861 3.367 - 3.381: 34.2788% ( 787) 00:28:38.861 3.381 - 3.395: 39.8209% ( 891) 00:28:38.861 3.395 - 3.409: 44.3366% ( 726) 00:28:38.861 3.409 - 3.423: 48.4046% ( 654) 00:28:38.861 3.423 - 3.437: 52.2175% ( 613) 00:28:38.861 3.437 - 3.450: 58.6055% ( 1027) 00:28:38.861 3.450 - 3.464: 65.1801% ( 1057) 00:28:38.861 3.464 - 3.478: 69.9820% ( 772) 00:28:38.861 3.478 - 3.492: 74.9767% ( 803) 00:28:38.861 
3.492 - 3.506: 79.2996% ( 695) 00:28:38.861 3.506 - 3.520: 82.5714% ( 526) 00:28:38.861 3.520 - 3.534: 84.6986% ( 342) 00:28:38.861 3.534 - 3.548: 85.7996% ( 177) 00:28:38.861 3.548 - 3.562: 86.7139% ( 147) 00:28:38.861 3.562 - 3.590: 87.7900% ( 173) 00:28:38.861 3.590 - 3.617: 89.2331% ( 232) 00:28:38.861 3.617 - 3.645: 90.9125% ( 270) 00:28:38.861 3.645 - 3.673: 92.4675% ( 250) 00:28:38.861 3.673 - 3.701: 94.0909% ( 261) 00:28:38.861 3.701 - 3.729: 95.9632% ( 301) 00:28:38.861 3.729 - 3.757: 97.4809% ( 244) 00:28:38.861 3.757 - 3.784: 98.3206% ( 135) 00:28:38.861 3.784 - 3.812: 98.9053% ( 94) 00:28:38.861 3.812 - 3.840: 99.3220% ( 67) 00:28:38.861 3.840 - 3.868: 99.4962% ( 28) 00:28:38.861 3.868 - 3.896: 99.5646% ( 11) 00:28:38.861 3.896 - 3.923: 99.5957% ( 5) 00:28:38.861 3.923 - 3.951: 99.6081% ( 2) 00:28:38.861 3.951 - 3.979: 99.6144% ( 1) 00:28:38.861 4.090 - 4.118: 99.6268% ( 2) 00:28:38.861 4.118 - 4.146: 99.6330% ( 1) 00:28:38.861 4.230 - 4.257: 99.6392% ( 1) 00:28:38.861 5.454 - 5.482: 99.6455% ( 1) 00:28:38.861 5.482 - 5.510: 99.6517% ( 1) 00:28:38.861 5.593 - 5.621: 99.6579% ( 1) 00:28:38.861 5.843 - 5.871: 99.6641% ( 1) 00:28:38.861 5.983 - 6.010: 99.6703% ( 1) 00:28:38.861 6.372 - 6.400: 99.6766% ( 1) 00:28:38.861 6.511 - 6.539: 99.6828% ( 1) 00:28:38.861 6.539 - 6.567: 99.6890% ( 1) 00:28:38.861 6.623 - 6.650: 99.6952% ( 1) 00:28:38.861 6.678 - 6.706: 99.7014% ( 1) 00:28:38.861 6.873 - 6.901: 99.7077% ( 1) 00:28:38.861 7.068 - 7.096: 99.7139% ( 1) 00:28:38.861 7.235 - 7.290: 99.7263% ( 2) 00:28:38.861 7.346 - 7.402: 99.7388% ( 2) 00:28:38.861 7.402 - 7.457: 99.7512% ( 2) 00:28:38.861 7.680 - 7.736: 99.7636% ( 2) 00:28:38.861 7.736 - 7.791: 99.7699% ( 1) 00:28:38.861 7.847 - 7.903: 99.8010% ( 5) 00:28:38.861 7.903 - 7.958: 99.8134% ( 2) 00:28:38.861 8.014 - 8.070: 99.8321% ( 3) 00:28:38.861 8.125 - 8.181: 99.8383% ( 1) 00:28:38.861 8.292 - 8.348: 99.8445% ( 1) 00:28:38.861 8.515 - 8.570: 99.8507% ( 1) 00:28:38.861 8.570 - 8.626: 99.8569% ( 1) 
00:28:38.861 9.016 - 9.071: 99.8632% ( 1) 00:28:38.861 9.238 - 9.294: 99.8694% ( 1) 00:28:38.861 9.294 - 9.350: 99.8756% ( 1) 00:28:38.861 9.683 - 9.739: 99.8818% ( 1) 00:28:38.861 11.576 - 11.631: 99.8880% ( 1) 00:28:38.861 11.965 - 12.021: 99.8943% ( 1) 00:28:38.861 13.802 - 13.857: 99.9005% ( 1) 00:28:38.861 19.367 - 19.478: 99.9067% ( 1) 00:28:38.861 3989.148 - 4017.642: 100.0000% ( 15) 00:28:38.861 00:28:38.861 Complete histogram 00:28:38.861 ================== 00:28:38.861 Range in us Cumulative Count 00:28:38.861 1.809 - [2024-12-09 10:39:40.000079] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:39.124 1.823: 0.0062% ( 1) 00:28:39.124 1.823 - 1.837: 0.0373% ( 5) 00:28:39.124 1.837 - 1.850: 0.0871% ( 8) 00:28:39.124 1.850 - 1.864: 0.1057% ( 3) 00:28:39.124 1.864 - 1.878: 0.4230% ( 51) 00:28:39.124 1.878 - 1.892: 6.8981% ( 1041) 00:28:39.124 1.892 - 1.906: 18.0755% ( 1797) 00:28:39.124 1.906 - 1.920: 23.6052% ( 889) 00:28:39.124 1.920 - 1.934: 25.9004% ( 369) 00:28:39.124 1.934 - 1.948: 41.0400% ( 2434) 00:28:39.124 1.948 - 1.962: 75.4681% ( 5535) 00:28:39.124 1.962 - 1.976: 94.1283% ( 3000) 00:28:39.124 1.976 - 1.990: 97.9225% ( 610) 00:28:39.124 1.990 - 2.003: 98.9301% ( 162) 00:28:39.124 2.003 - 2.017: 99.1043% ( 28) 00:28:39.124 2.017 - 2.031: 99.1541% ( 8) 00:28:39.124 2.031 - 2.045: 99.1790% ( 4) 00:28:39.124 2.045 - 2.059: 99.2225% ( 7) 00:28:39.124 2.059 - 2.073: 99.2287% ( 1) 00:28:39.124 2.073 - 2.087: 99.2474% ( 3) 00:28:39.124 2.087 - 2.101: 99.2598% ( 2) 00:28:39.124 2.115 - 2.129: 99.2660% ( 1) 00:28:39.124 2.129 - 2.143: 99.2723% ( 1) 00:28:39.124 2.170 - 2.184: 99.2785% ( 1) 00:28:39.124 2.184 - 2.198: 99.2847% ( 1) 00:28:39.124 2.212 - 2.226: 99.2909% ( 1) 00:28:39.124 2.254 - 2.268: 99.2971% ( 1) 00:28:39.124 2.282 - 2.296: 99.3034% ( 1) 00:28:39.124 2.296 - 2.310: 99.3096% ( 1) 00:28:39.124 2.337 - 2.351: 99.3158% ( 1) 00:28:39.124 2.365 - 2.379: 99.3220% ( 1) 00:28:39.124 2.435 - 
2.449: 99.3282% ( 1) 00:28:39.124 2.727 - 2.741: 99.3345% ( 1) 00:28:39.124 3.812 - 3.840: 99.3407% ( 1) 00:28:39.124 3.840 - 3.868: 99.3469% ( 1) 00:28:39.124 3.951 - 3.979: 99.3531% ( 1) 00:28:39.124 4.146 - 4.174: 99.3593% ( 1) 00:28:39.124 4.230 - 4.257: 99.3656% ( 1) 00:28:39.125 4.730 - 4.758: 99.3718% ( 1) 00:28:39.125 4.897 - 4.925: 99.3780% ( 1) 00:28:39.125 5.120 - 5.148: 99.3842% ( 1) 00:28:39.125 5.231 - 5.259: 99.3904% ( 1) 00:28:39.125 5.259 - 5.287: 99.3967% ( 1) 00:28:39.125 5.398 - 5.426: 99.4029% ( 1) 00:28:39.125 5.537 - 5.565: 99.4091% ( 1) 00:28:39.125 5.565 - 5.593: 99.4153% ( 1) 00:28:39.125 5.593 - 5.621: 99.4215% ( 1) 00:28:39.125 5.899 - 5.927: 99.4278% ( 1) 00:28:39.125 6.066 - 6.094: 99.4340% ( 1) 00:28:39.125 6.177 - 6.205: 99.4402% ( 1) 00:28:39.125 6.205 - 6.233: 99.4464% ( 1) 00:28:39.125 6.233 - 6.261: 99.4526% ( 1) 00:28:39.125 6.317 - 6.344: 99.4589% ( 1) 00:28:39.125 6.372 - 6.400: 99.4651% ( 1) 00:28:39.125 6.511 - 6.539: 99.4713% ( 1) 00:28:39.125 6.539 - 6.567: 99.4775% ( 1) 00:28:39.125 6.595 - 6.623: 99.4837% ( 1) 00:28:39.125 6.706 - 6.734: 99.4900% ( 1) 00:28:39.125 6.929 - 6.957: 99.5024% ( 2) 00:28:39.125 6.957 - 6.984: 99.5086% ( 1) 00:28:39.125 7.290 - 7.346: 99.5148% ( 1) 00:28:39.125 7.569 - 7.624: 99.5211% ( 1) 00:28:39.125 7.791 - 7.847: 99.5273% ( 1) 00:28:39.125 8.348 - 8.403: 99.5335% ( 1) 00:28:39.125 8.570 - 8.626: 99.5397% ( 1) 00:28:39.125 8.682 - 8.737: 99.5459% ( 1) 00:28:39.125 9.350 - 9.405: 99.5522% ( 1) 00:28:39.125 10.630 - 10.685: 99.5584% ( 1) 00:28:39.125 12.967 - 13.023: 99.5646% ( 1) 00:28:39.125 13.969 - 14.024: 99.5708% ( 1) 00:28:39.125 3162.824 - 3177.071: 99.5770% ( 1) 00:28:39.125 3989.148 - 4017.642: 99.9938% ( 67) 00:28:39.125 4986.435 - 5014.929: 100.0000% ( 1) 00:28:39.125 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:28:39.125 10:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:28:39.125 [ 00:28:39.125 { 00:28:39.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:39.125 "subtype": "Discovery", 00:28:39.125 "listen_addresses": [], 00:28:39.125 "allow_any_host": true, 00:28:39.125 "hosts": [] 00:28:39.125 }, 00:28:39.125 { 00:28:39.125 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:28:39.125 "subtype": "NVMe", 00:28:39.125 "listen_addresses": [ 00:28:39.125 { 00:28:39.125 "trtype": "VFIOUSER", 00:28:39.125 "adrfam": "IPv4", 00:28:39.125 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:28:39.125 "trsvcid": "0" 00:28:39.125 } 00:28:39.125 ], 00:28:39.125 "allow_any_host": true, 00:28:39.125 "hosts": [], 00:28:39.125 "serial_number": "SPDK1", 00:28:39.125 "model_number": "SPDK bdev Controller", 00:28:39.125 "max_namespaces": 32, 00:28:39.125 "min_cntlid": 1, 00:28:39.125 "max_cntlid": 65519, 00:28:39.125 "namespaces": [ 00:28:39.125 { 00:28:39.125 "nsid": 1, 00:28:39.125 "bdev_name": "Malloc1", 00:28:39.125 "name": "Malloc1", 00:28:39.125 "nguid": "957872BC226C4C37BC0984DB8FD75011", 00:28:39.125 "uuid": "957872bc-226c-4c37-bc09-84db8fd75011" 00:28:39.125 }, 00:28:39.125 { 00:28:39.125 "nsid": 2, 00:28:39.125 "bdev_name": "Malloc3", 00:28:39.125 "name": "Malloc3", 00:28:39.125 "nguid": "E3DEB32CE854450D820490A5E3F80089", 00:28:39.125 "uuid": "e3deb32c-e854-450d-8204-90a5e3f80089" 00:28:39.125 } 00:28:39.125 ] 00:28:39.125 }, 00:28:39.125 { 00:28:39.125 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:28:39.125 "subtype": "NVMe", 00:28:39.125 "listen_addresses": [ 00:28:39.125 { 00:28:39.125 "trtype": "VFIOUSER", 00:28:39.125 "adrfam": "IPv4", 00:28:39.125 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:28:39.125 "trsvcid": "0" 00:28:39.125 } 00:28:39.125 ], 00:28:39.125 "allow_any_host": true, 00:28:39.125 "hosts": [], 00:28:39.125 "serial_number": "SPDK2", 00:28:39.125 "model_number": "SPDK bdev Controller", 00:28:39.125 "max_namespaces": 32, 00:28:39.125 "min_cntlid": 1, 00:28:39.125 "max_cntlid": 65519, 00:28:39.125 "namespaces": [ 00:28:39.125 { 00:28:39.125 "nsid": 1, 00:28:39.125 "bdev_name": "Malloc2", 00:28:39.125 "name": "Malloc2", 00:28:39.125 "nguid": "09B9DD07693C4DDA8B75729F16A24846", 00:28:39.125 "uuid": "09b9dd07-693c-4dda-8b75-729f16a24846" 00:28:39.125 } 00:28:39.125 ] 00:28:39.125 } 00:28:39.125 ] 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=632080 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:28:39.125 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:28:39.383 [2024-12-09 10:39:40.422513] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:28:39.383 Malloc4 00:28:39.383 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:28:39.641 [2024-12-09 10:39:40.647155] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:28:39.641 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:28:39.641 Asynchronous Event Request test 00:28:39.641 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:28:39.641 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:28:39.641 Registering asynchronous event callbacks... 00:28:39.641 Starting namespace attribute notice tests for all controllers... 00:28:39.641 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:39.641 aer_cb - Changed Namespace 00:28:39.641 Cleaning up... 
00:28:39.900 [ 00:28:39.900 { 00:28:39.900 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:39.900 "subtype": "Discovery", 00:28:39.900 "listen_addresses": [], 00:28:39.900 "allow_any_host": true, 00:28:39.900 "hosts": [] 00:28:39.900 }, 00:28:39.900 { 00:28:39.900 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:28:39.900 "subtype": "NVMe", 00:28:39.900 "listen_addresses": [ 00:28:39.900 { 00:28:39.900 "trtype": "VFIOUSER", 00:28:39.900 "adrfam": "IPv4", 00:28:39.900 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:28:39.900 "trsvcid": "0" 00:28:39.900 } 00:28:39.900 ], 00:28:39.900 "allow_any_host": true, 00:28:39.900 "hosts": [], 00:28:39.900 "serial_number": "SPDK1", 00:28:39.900 "model_number": "SPDK bdev Controller", 00:28:39.900 "max_namespaces": 32, 00:28:39.900 "min_cntlid": 1, 00:28:39.900 "max_cntlid": 65519, 00:28:39.900 "namespaces": [ 00:28:39.900 { 00:28:39.900 "nsid": 1, 00:28:39.900 "bdev_name": "Malloc1", 00:28:39.900 "name": "Malloc1", 00:28:39.900 "nguid": "957872BC226C4C37BC0984DB8FD75011", 00:28:39.900 "uuid": "957872bc-226c-4c37-bc09-84db8fd75011" 00:28:39.900 }, 00:28:39.900 { 00:28:39.900 "nsid": 2, 00:28:39.900 "bdev_name": "Malloc3", 00:28:39.901 "name": "Malloc3", 00:28:39.901 "nguid": "E3DEB32CE854450D820490A5E3F80089", 00:28:39.901 "uuid": "e3deb32c-e854-450d-8204-90a5e3f80089" 00:28:39.901 } 00:28:39.901 ] 00:28:39.901 }, 00:28:39.901 { 00:28:39.901 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:28:39.901 "subtype": "NVMe", 00:28:39.901 "listen_addresses": [ 00:28:39.901 { 00:28:39.901 "trtype": "VFIOUSER", 00:28:39.901 "adrfam": "IPv4", 00:28:39.901 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:28:39.901 "trsvcid": "0" 00:28:39.901 } 00:28:39.901 ], 00:28:39.901 "allow_any_host": true, 00:28:39.901 "hosts": [], 00:28:39.901 "serial_number": "SPDK2", 00:28:39.901 "model_number": "SPDK bdev Controller", 00:28:39.901 "max_namespaces": 32, 00:28:39.901 "min_cntlid": 1, 00:28:39.901 "max_cntlid": 65519, 00:28:39.901 "namespaces": [ 
00:28:39.901 { 00:28:39.901 "nsid": 1, 00:28:39.901 "bdev_name": "Malloc2", 00:28:39.901 "name": "Malloc2", 00:28:39.901 "nguid": "09B9DD07693C4DDA8B75729F16A24846", 00:28:39.901 "uuid": "09b9dd07-693c-4dda-8b75-729f16a24846" 00:28:39.901 }, 00:28:39.901 { 00:28:39.901 "nsid": 2, 00:28:39.901 "bdev_name": "Malloc4", 00:28:39.901 "name": "Malloc4", 00:28:39.901 "nguid": "C9F4BF1C759B4A8A9187B121E0F4309C", 00:28:39.901 "uuid": "c9f4bf1c-759b-4a8a-9187-b121e0f4309c" 00:28:39.901 } 00:28:39.901 ] 00:28:39.901 } 00:28:39.901 ] 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 632080 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 623704 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 623704 ']' 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 623704 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623704 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623704' 00:28:39.901 killing process with pid 623704 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 623704 00:28:39.901 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 623704 00:28:40.159 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:28:40.159 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:28:40.159 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:28:40.159 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:28:40.159 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=632242 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 632242' 00:28:40.160 Process pid: 632242 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 632242 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 632242 ']' 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.160 10:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.160 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:28:40.160 [2024-12-09 10:39:41.254389] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:40.160 [2024-12-09 10:39:41.255308] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:28:40.160 [2024-12-09 10:39:41.255350] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.160 [2024-12-09 10:39:41.321545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.419 [2024-12-09 10:39:41.361542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.419 [2024-12-09 10:39:41.361580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.419 [2024-12-09 10:39:41.361588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.419 [2024-12-09 10:39:41.361595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.419 [2024-12-09 10:39:41.361600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.419 [2024-12-09 10:39:41.363058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.419 [2024-12-09 10:39:41.363155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.419 [2024-12-09 10:39:41.363265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.419 [2024-12-09 10:39:41.363266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.419 [2024-12-09 10:39:41.431748] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:40.419 [2024-12-09 10:39:41.431853] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:40.419 [2024-12-09 10:39:41.431967] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:40.419 [2024-12-09 10:39:41.432242] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:40.419 [2024-12-09 10:39:41.432411] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:40.419 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.419 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:28:40.419 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:28:41.356 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:28:41.617 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:28:41.617 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:28:41.617 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:28:41.617 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:28:41.617 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:41.876 Malloc1 00:28:41.876 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:28:42.135 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:28:42.135 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:28:42.394 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:28:42.394 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:28:42.394 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:28:42.653 Malloc2 00:28:42.653 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:28:42.912 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 632242 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 632242 ']' 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 632242 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:28:43.171 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.171 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632242 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632242' 00:28:43.431 killing process with pid 632242 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 632242 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 632242 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:28:43.431 00:28:43.431 real 0m52.033s 00:28:43.431 user 3m21.411s 00:28:43.431 sys 0m3.218s 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.431 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:28:43.431 ************************************ 00:28:43.431 END TEST nvmf_vfio_user 00:28:43.431 ************************************ 00:28:43.692 10:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:28:43.692 10:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.693 ************************************ 00:28:43.693 START TEST nvmf_vfio_user_nvme_compliance 00:28:43.693 ************************************ 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:28:43.693 * Looking for test storage... 00:28:43.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.693 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.693 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:43.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.693 --rc genhtml_branch_coverage=1 00:28:43.693 --rc genhtml_function_coverage=1 00:28:43.693 --rc genhtml_legend=1 00:28:43.693 --rc geninfo_all_blocks=1 00:28:43.693 --rc geninfo_unexecuted_blocks=1 00:28:43.693 00:28:43.693 ' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:43.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.693 --rc genhtml_branch_coverage=1 00:28:43.693 --rc genhtml_function_coverage=1 00:28:43.693 --rc genhtml_legend=1 00:28:43.693 --rc geninfo_all_blocks=1 00:28:43.693 --rc geninfo_unexecuted_blocks=1 00:28:43.693 00:28:43.693 ' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:43.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.693 --rc genhtml_branch_coverage=1 00:28:43.693 --rc genhtml_function_coverage=1 00:28:43.693 --rc 
genhtml_legend=1 00:28:43.693 --rc geninfo_all_blocks=1 00:28:43.693 --rc geninfo_unexecuted_blocks=1 00:28:43.693 00:28:43.693 ' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:43.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.693 --rc genhtml_branch_coverage=1 00:28:43.693 --rc genhtml_function_coverage=1 00:28:43.693 --rc genhtml_legend=1 00:28:43.693 --rc geninfo_all_blocks=1 00:28:43.693 --rc geninfo_unexecuted_blocks=1 00:28:43.693 00:28:43.693 ' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.693 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.694 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:43.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.694 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=632871 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 632871' 00:28:43.694 Process pid: 632871 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 632871 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 632871 ']' 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.694 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:43.954 [2024-12-09 10:39:44.897599] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:28:43.954 [2024-12-09 10:39:44.897645] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.954 [2024-12-09 10:39:44.960139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.954 [2024-12-09 10:39:44.999526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.954 [2024-12-09 10:39:44.999562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.954 [2024-12-09 10:39:44.999569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.954 [2024-12-09 10:39:44.999575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.954 [2024-12-09 10:39:44.999581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
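
The EAL notices above belong to the compliance target, launched earlier in this log with `nvmf_tgt -i 0 -e 0xFFFF -m 0x7` (three cores, hence "Total cores available: 3"). The single vfio-user endpoint it exposes for the `nvme_compliance` binary is set up with the `rpc_cmd` calls xtraced below; as a dry-run sketch, where the `rpc()` echo wrapper and `SPDK_DIR` are illustrative assumptions rather than the suite's own helpers:

```shell
#!/usr/bin/env bash
set -eu
# Sketch of the single-endpoint setup used by the vfio-user compliance test.
# SPDK_DIR is an assumed variable; when unset, rpc() only echoes the command.
rpc() {
  if [ -n "${SPDK_DIR:-}" ]; then
    "$SPDK_DIR/scripts/rpc.py" "$@"
  else
    echo "rpc.py $*"
  fi
}

if [ -n "${SPDK_DIR:-}" ]; then mkdir -p /var/run/vfio-user; fi
rpc nvmf_create_transport -t VFIOUSER
rpc bdev_malloc_create 64 512 -b malloc0
rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# The compliance binary then connects over that socket directory:
#   nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
```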
00:28:43.954 [2024-12-09 10:39:45.000987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.954 [2024-12-09 10:39:45.001085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.954 [2024-12-09 10:39:45.001088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.954 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.954 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:28:43.954 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.331 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:45.331 malloc0 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:45.331 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:28:45.331 00:28:45.331 00:28:45.331 CUnit - A unit testing framework for C - Version 2.1-3 00:28:45.331 http://cunit.sourceforge.net/ 00:28:45.331 00:28:45.331 00:28:45.331 Suite: nvme_compliance 00:28:45.331 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 10:39:46.357210] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.331 [2024-12-09 10:39:46.358566] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:28:45.331 [2024-12-09 10:39:46.358580] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:28:45.331 [2024-12-09 10:39:46.358586] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:28:45.331 [2024-12-09 10:39:46.360234] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:45.331 passed 00:28:45.331 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 10:39:46.446817] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.331 [2024-12-09 10:39:46.449836] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:45.331 passed 00:28:45.590 Test: admin_identify_ns ...[2024-12-09 10:39:46.534082] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.590 [2024-12-09 10:39:46.596014] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:28:45.590 [2024-12-09 10:39:46.604007] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:28:45.590 [2024-12-09 10:39:46.625097] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:28:45.590 passed 00:28:45.590 Test: admin_get_features_mandatory_features ...[2024-12-09 10:39:46.705748] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.590 [2024-12-09 10:39:46.710773] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:45.590 passed 00:28:45.849 Test: admin_get_features_optional_features ...[2024-12-09 10:39:46.793317] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.849 [2024-12-09 10:39:46.796343] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:45.849 passed 00:28:45.849 Test: admin_set_features_number_of_queues ...[2024-12-09 10:39:46.880555] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:45.849 [2024-12-09 10:39:46.985089] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:45.849 passed 00:28:46.108 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 10:39:47.067429] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.108 [2024-12-09 10:39:47.070453] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.108 passed 00:28:46.108 Test: admin_get_log_page_with_lpo ...[2024-12-09 10:39:47.150507] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.108 [2024-12-09 10:39:47.222006] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:28:46.108 [2024-12-09 10:39:47.235053] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.108 passed 00:28:46.366 Test: fabric_property_get ...[2024-12-09 10:39:47.317438] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.366 [2024-12-09 10:39:47.318674] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:28:46.366 [2024-12-09 10:39:47.320457] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.366 passed 00:28:46.366 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 10:39:47.402980] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.366 [2024-12-09 10:39:47.404225] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:28:46.366 [2024-12-09 10:39:47.406003] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.366 passed 00:28:46.367 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 10:39:47.488120] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.625 [2024-12-09 10:39:47.573029] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:28:46.625 [2024-12-09 10:39:47.589009] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:28:46.625 [2024-12-09 10:39:47.594080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.625 passed 00:28:46.625 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 10:39:47.678170] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.625 [2024-12-09 10:39:47.679406] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:28:46.625 [2024-12-09 10:39:47.681198] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.625 passed 00:28:46.625 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 10:39:47.764308] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.883 [2024-12-09 10:39:47.840005] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:28:46.883 [2024-12-09 
10:39:47.864005] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:28:46.883 [2024-12-09 10:39:47.869085] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.883 passed 00:28:46.883 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 10:39:47.952073] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:46.883 [2024-12-09 10:39:47.953321] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:28:46.883 [2024-12-09 10:39:47.953346] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:28:46.883 [2024-12-09 10:39:47.955097] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:46.883 passed 00:28:46.883 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 10:39:48.037549] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:47.142 [2024-12-09 10:39:48.130008] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:28:47.142 [2024-12-09 10:39:48.138005] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:28:47.142 [2024-12-09 10:39:48.146005] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:28:47.142 [2024-12-09 10:39:48.154004] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:28:47.142 [2024-12-09 10:39:48.183090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:47.142 passed 00:28:47.142 Test: admin_create_io_sq_verify_pc ...[2024-12-09 10:39:48.266696] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:47.142 [2024-12-09 10:39:48.283015] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:28:47.142 [2024-12-09 10:39:48.300860] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:47.400 passed 00:28:47.400 Test: admin_create_io_qp_max_qps ...[2024-12-09 10:39:48.385448] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:48.335 [2024-12-09 10:39:49.490009] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:28:48.917 [2024-12-09 10:39:49.865311] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:48.917 passed 00:28:48.917 Test: admin_create_io_sq_shared_cq ...[2024-12-09 10:39:49.948676] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:28:48.917 [2024-12-09 10:39:50.080008] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:28:49.176 [2024-12-09 10:39:50.117074] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:28:49.176 passed 00:28:49.176 00:28:49.176 Run Summary: Type Total Ran Passed Failed Inactive 00:28:49.176 suites 1 1 n/a 0 0 00:28:49.176 tests 18 18 18 0 0 00:28:49.176 asserts 360 360 360 0 n/a 00:28:49.176 00:28:49.176 Elapsed time = 1.555 seconds 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 632871 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 632871 ']' 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 632871 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632871 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632871' 00:28:49.176 killing process with pid 632871 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 632871 00:28:49.176 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 632871 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:49.434 00:28:49.434 real 0m5.785s 00:28:49.434 user 0m16.234s 00:28:49.434 sys 0m0.498s 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:28:49.434 ************************************ 00:28:49.434 END TEST nvmf_vfio_user_nvme_compliance 00:28:49.434 ************************************ 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:49.434 ************************************ 00:28:49.434 START TEST nvmf_vfio_user_fuzz 00:28:49.434 ************************************ 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:28:49.434 * Looking for test storage... 00:28:49.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:28:49.434 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.693 10:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:49.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.693 --rc genhtml_branch_coverage=1 00:28:49.693 --rc genhtml_function_coverage=1 00:28:49.693 --rc genhtml_legend=1 00:28:49.693 --rc geninfo_all_blocks=1 00:28:49.693 --rc geninfo_unexecuted_blocks=1 00:28:49.693 00:28:49.693 ' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:49.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.693 --rc genhtml_branch_coverage=1 00:28:49.693 --rc genhtml_function_coverage=1 00:28:49.693 --rc genhtml_legend=1 00:28:49.693 --rc geninfo_all_blocks=1 00:28:49.693 --rc geninfo_unexecuted_blocks=1 00:28:49.693 00:28:49.693 ' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:49.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.693 --rc genhtml_branch_coverage=1 00:28:49.693 --rc genhtml_function_coverage=1 00:28:49.693 --rc genhtml_legend=1 00:28:49.693 --rc geninfo_all_blocks=1 00:28:49.693 --rc geninfo_unexecuted_blocks=1 00:28:49.693 00:28:49.693 ' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:49.693 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:49.693 --rc genhtml_branch_coverage=1 00:28:49.693 --rc genhtml_function_coverage=1 00:28:49.693 --rc genhtml_legend=1 00:28:49.693 --rc geninfo_all_blocks=1 00:28:49.693 --rc geninfo_unexecuted_blocks=1 00:28:49.693 00:28:49.693 ' 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:28:49.693 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.694 10:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=633857 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 633857' 00:28:49.694 Process pid: 633857 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 633857 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 633857 ']' 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.694 10:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.694 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:49.952 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.952 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:28:49.952 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.886 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:50.886 malloc0 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:28:50.886 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:29:22.952 Fuzzing completed. Shutting down the fuzz application 00:29:22.952 00:29:22.952 Dumping successful admin opcodes: 00:29:22.952 9, 10, 00:29:22.952 Dumping successful io opcodes: 00:29:22.952 0, 00:29:22.952 NS: 0x20000081ef00 I/O qp, Total commands completed: 989551, total successful commands: 3876, random_seed: 1146984448 00:29:22.952 NS: 0x20000081ef00 admin qp, Total commands completed: 243552, total successful commands: 57, random_seed: 2720670720 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 633857 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 633857 ']' 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 633857 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633857 00:29:22.952 10:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633857' 00:29:22.952 killing process with pid 633857 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 633857 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 633857 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:29:22.952 00:29:22.952 real 0m32.302s 00:29:22.952 user 0m30.172s 00:29:22.952 sys 0m31.004s 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:29:22.952 ************************************ 00:29:22.952 END TEST nvmf_vfio_user_fuzz 00:29:22.952 ************************************ 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:22.952 ************************************ 00:29:22.952 START TEST nvmf_auth_target 00:29:22.952 ************************************ 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:29:22.952 * Looking for test storage... 00:29:22.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:22.952 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.952 10:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.952 10:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:22.952 --rc genhtml_branch_coverage=1
00:29:22.952 --rc genhtml_function_coverage=1
00:29:22.952 --rc genhtml_legend=1
00:29:22.952 --rc geninfo_all_blocks=1
00:29:22.952 --rc geninfo_unexecuted_blocks=1
00:29:22.952
00:29:22.952 '
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:22.952 --rc genhtml_branch_coverage=1
00:29:22.952 --rc genhtml_function_coverage=1
00:29:22.952 --rc genhtml_legend=1
00:29:22.952 --rc geninfo_all_blocks=1
00:29:22.952 --rc geninfo_unexecuted_blocks=1
00:29:22.952
00:29:22.952 '
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:22.952 --rc genhtml_branch_coverage=1
00:29:22.952 --rc genhtml_function_coverage=1
00:29:22.952 --rc genhtml_legend=1
00:29:22.952 --rc geninfo_all_blocks=1
00:29:22.952 --rc geninfo_unexecuted_blocks=1
00:29:22.952
00:29:22.952 '
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:22.952 --rc genhtml_branch_coverage=1
00:29:22.952 --rc genhtml_function_coverage=1
00:29:22.952 --rc genhtml_legend=1
00:29:22.952
--rc geninfo_all_blocks=1
00:29:22.952 --rc geninfo_unexecuted_blocks=1
00:29:22.952
00:29:22.952 '
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:22.952 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:22.953
10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:22.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:29:22.953 10:40:23
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:22.953 10:40:23
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:29:22.953 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:29:28.220 10:40:28
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:28.220 10:40:28
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:29:28.220 Found 0000:86:00.0 (0x8086 - 0x159b)
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:29:28.220 Found 0000:86:00.1 (0x8086 - 0x159b)
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:28.220
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:28.220 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:29:28.221 Found net devices under 0000:86:00.0: cvl_0_0
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:28.221
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:29:28.221 Found net devices under 0000:86:00.1: cvl_0_1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:28.221 10:40:28
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:28.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:28.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms
00:29:28.221
00:29:28.221 --- 10.0.0.2 ping statistics ---
00:29:28.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:28.221 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:28.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:28.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms
00:29:28.221
00:29:28.221 --- 10.0.0.1 ping statistics ---
00:29:28.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:28.221 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=642367
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 642367
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:29:28.221 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 642367 ']'
00:29:28.222 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:28.222 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:28.222 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:28.222 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:28.222 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=642392
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@754 -- # digest=null
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fba57f9163e8fe2323f5a97fc8b6117f60573abd1a886cc6
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iAX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fba57f9163e8fe2323f5a97fc8b6117f60573abd1a886cc6 0
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fba57f9163e8fe2323f5a97fc8b6117f60573abd1a886cc6 0
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fba57f9163e8fe2323f5a97fc8b6117f60573abd1a886cc6
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iAX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iAX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.iAX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9428f5d2ed5db9337b8b476e890428d9dcd6f64ef97c1bc23aa77fdfacf6e021
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3tA
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9428f5d2ed5db9337b8b476e890428d9dcd6f64ef97c1bc23aa77fdfacf6e021 3
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9428f5d2ed5db9337b8b476e890428d9dcd6f64ef97c1bc23aa77fdfacf6e021 3
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9428f5d2ed5db9337b8b476e890428d9dcd6f64ef97c1bc23aa77fdfacf6e021
00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@732 -- # digest=3 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3tA 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3tA 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3tA 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a673a853446a98f6a49842554adc28b5 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ICR 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a673a853446a98f6a49842554adc28b5 1 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a673a853446a98f6a49842554adc28b5 1 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a673a853446a98f6a49842554adc28b5 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ICR 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ICR 00:29:28.222 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ICR 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dee9aefb45c8015e97c94af810482faf601bf5f1502368d0 00:29:28.223 10:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.l4N 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dee9aefb45c8015e97c94af810482faf601bf5f1502368d0 2 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dee9aefb45c8015e97c94af810482faf601bf5f1502368d0 2 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dee9aefb45c8015e97c94af810482faf601bf5f1502368d0 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.l4N 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.l4N 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.l4N 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:28.223 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87fa474cd804a76cdd7d4201c5087d6d783b6028ee06216f 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lm9 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87fa474cd804a76cdd7d4201c5087d6d783b6028ee06216f 2 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87fa474cd804a76cdd7d4201c5087d6d783b6028ee06216f 2 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87fa474cd804a76cdd7d4201c5087d6d783b6028ee06216f 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lm9 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lm9 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.lm9 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc661e5fe15b7ddff6c1c8e694b0edd3 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BLh 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc661e5fe15b7ddff6c1c8e694b0edd3 1 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc661e5fe15b7ddff6c1c8e694b0edd3 1 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc661e5fe15b7ddff6c1c8e694b0edd3 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BLh 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BLh 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.BLh 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3d24d6646d950eaaca30e776ecd07b49071ec129b5cea00d51d4bef51ac9b958 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GMc 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3d24d6646d950eaaca30e776ecd07b49071ec129b5cea00d51d4bef51ac9b958 3 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 3d24d6646d950eaaca30e776ecd07b49071ec129b5cea00d51d4bef51ac9b958 3 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3d24d6646d950eaaca30e776ecd07b49071ec129b5cea00d51d4bef51ac9b958 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GMc 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GMc 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.GMc 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:29:28.481 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 642367 00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 642367 ']' 00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.482 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 642392 /var/tmp/host.sock 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 642392 ']' 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:29:28.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.740 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iAX 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iAX 00:29:28.999 10:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iAX 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.3tA ]] 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3tA 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3tA 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3tA 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ICR 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ICR 00:29:29.259 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ICR 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.l4N ]] 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l4N 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l4N 00:29:29.518 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l4N 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lm9 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lm9 00:29:29.777 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lm9 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.BLh ]] 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLh 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLh 00:29:30.035 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLh 00:29:30.035 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:29:30.035 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GMc 00:29:30.035 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.035 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.035 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.036 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GMc 00:29:30.036 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GMc 00:29:30.294 10:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:29:30.294 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:29:30.294 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.294 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:30.294 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:30.294 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.557 10:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.557 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.817 00:29:30.817 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:30.817 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:30.817 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:31.076 { 00:29:31.076 "cntlid": 1, 00:29:31.076 "qid": 0, 00:29:31.076 "state": "enabled", 00:29:31.076 "thread": "nvmf_tgt_poll_group_000", 00:29:31.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:31.076 "listen_address": { 00:29:31.076 "trtype": "TCP", 00:29:31.076 "adrfam": "IPv4", 00:29:31.076 "traddr": "10.0.0.2", 00:29:31.076 "trsvcid": "4420" 00:29:31.076 }, 00:29:31.076 "peer_address": { 00:29:31.076 "trtype": "TCP", 00:29:31.076 "adrfam": "IPv4", 00:29:31.076 "traddr": "10.0.0.1", 00:29:31.076 "trsvcid": "59808" 00:29:31.076 }, 00:29:31.076 "auth": { 00:29:31.076 "state": "completed", 00:29:31.076 "digest": "sha256", 00:29:31.076 "dhgroup": "null" 00:29:31.076 } 00:29:31.076 } 00:29:31.076 ]' 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:31.076 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:31.335 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:31.335 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:31.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:29:31.901 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.160 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.419 00:29:32.419 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:32.419 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:32.419 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:32.677 { 00:29:32.677 "cntlid": 3, 00:29:32.677 "qid": 0, 00:29:32.677 "state": "enabled", 00:29:32.677 "thread": "nvmf_tgt_poll_group_000", 00:29:32.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:32.677 "listen_address": { 00:29:32.677 "trtype": "TCP", 00:29:32.677 "adrfam": "IPv4", 00:29:32.677 
"traddr": "10.0.0.2", 00:29:32.677 "trsvcid": "4420" 00:29:32.677 }, 00:29:32.677 "peer_address": { 00:29:32.677 "trtype": "TCP", 00:29:32.677 "adrfam": "IPv4", 00:29:32.677 "traddr": "10.0.0.1", 00:29:32.677 "trsvcid": "59842" 00:29:32.677 }, 00:29:32.677 "auth": { 00:29:32.677 "state": "completed", 00:29:32.677 "digest": "sha256", 00:29:32.677 "dhgroup": "null" 00:29:32.677 } 00:29:32.677 } 00:29:32.677 ]' 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:32.677 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:32.936 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:32.936 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:33.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:33.521 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.812 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.812 00:29:34.079 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:34.079 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:34.079 
10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:34.079 { 00:29:34.079 "cntlid": 5, 00:29:34.079 "qid": 0, 00:29:34.079 "state": "enabled", 00:29:34.079 "thread": "nvmf_tgt_poll_group_000", 00:29:34.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:34.079 "listen_address": { 00:29:34.079 "trtype": "TCP", 00:29:34.079 "adrfam": "IPv4", 00:29:34.079 "traddr": "10.0.0.2", 00:29:34.079 "trsvcid": "4420" 00:29:34.079 }, 00:29:34.079 "peer_address": { 00:29:34.079 "trtype": "TCP", 00:29:34.079 "adrfam": "IPv4", 00:29:34.079 "traddr": "10.0.0.1", 00:29:34.079 "trsvcid": "59864" 00:29:34.079 }, 00:29:34.079 "auth": { 00:29:34.079 "state": "completed", 00:29:34.079 "digest": "sha256", 00:29:34.079 "dhgroup": "null" 00:29:34.079 } 00:29:34.079 } 00:29:34.079 ]' 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:34.079 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:34.338 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:34.338 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:34.339 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:34.339 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:34.339 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:34.905 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:34.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:34.905 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:34.905 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.905 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.163 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:35.164 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:35.422 00:29:35.422 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:35.422 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:35.422 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.681 
10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:35.681 { 00:29:35.681 "cntlid": 7, 00:29:35.681 "qid": 0, 00:29:35.681 "state": "enabled", 00:29:35.681 "thread": "nvmf_tgt_poll_group_000", 00:29:35.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:35.681 "listen_address": { 00:29:35.681 "trtype": "TCP", 00:29:35.681 "adrfam": "IPv4", 00:29:35.681 "traddr": "10.0.0.2", 00:29:35.681 "trsvcid": "4420" 00:29:35.681 }, 00:29:35.681 "peer_address": { 00:29:35.681 "trtype": "TCP", 00:29:35.681 "adrfam": "IPv4", 00:29:35.681 "traddr": "10.0.0.1", 00:29:35.681 "trsvcid": "58108" 00:29:35.681 }, 00:29:35.681 "auth": { 00:29:35.681 "state": "completed", 00:29:35.681 "digest": "sha256", 00:29:35.681 "dhgroup": "null" 00:29:35.681 } 00:29:35.681 } 00:29:35.681 ]' 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:35.681 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:35.941 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:35.941 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:35.941 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.941 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:35.941 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:36.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:36.508 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.767 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.025 00:29:37.025 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:37.025 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:37.025 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:37.284 { 00:29:37.284 "cntlid": 9, 00:29:37.284 "qid": 0, 00:29:37.284 "state": "enabled", 00:29:37.284 "thread": "nvmf_tgt_poll_group_000", 00:29:37.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:37.284 "listen_address": { 00:29:37.284 "trtype": "TCP", 00:29:37.284 "adrfam": "IPv4", 00:29:37.284 "traddr": "10.0.0.2", 00:29:37.284 "trsvcid": "4420" 00:29:37.284 }, 00:29:37.284 "peer_address": { 00:29:37.284 "trtype": "TCP", 00:29:37.284 "adrfam": "IPv4", 00:29:37.284 "traddr": "10.0.0.1", 00:29:37.284 "trsvcid": "58126" 00:29:37.284 
}, 00:29:37.284 "auth": { 00:29:37.284 "state": "completed", 00:29:37.284 "digest": "sha256", 00:29:37.284 "dhgroup": "ffdhe2048" 00:29:37.284 } 00:29:37.284 } 00:29:37.284 ]' 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:37.284 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:37.542 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.542 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.542 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.542 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:37.542 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret 
DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:38.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:38.109 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.368 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.626 00:29:38.626 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:38.626 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:38.626 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:38.885 { 00:29:38.885 "cntlid": 11, 00:29:38.885 "qid": 0, 00:29:38.885 "state": "enabled", 00:29:38.885 "thread": "nvmf_tgt_poll_group_000", 00:29:38.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:38.885 "listen_address": { 00:29:38.885 "trtype": "TCP", 00:29:38.885 "adrfam": "IPv4", 00:29:38.885 "traddr": "10.0.0.2", 00:29:38.885 "trsvcid": "4420" 00:29:38.885 }, 00:29:38.885 "peer_address": { 00:29:38.885 "trtype": "TCP", 00:29:38.885 "adrfam": "IPv4", 00:29:38.885 "traddr": "10.0.0.1", 00:29:38.885 "trsvcid": "58158" 00:29:38.885 }, 00:29:38.885 "auth": { 00:29:38.885 "state": "completed", 00:29:38.885 "digest": "sha256", 00:29:38.885 "dhgroup": "ffdhe2048" 00:29:38.885 } 00:29:38.885 } 00:29:38.885 ]' 00:29:38.885 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:38.885 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:38.885 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:38.885 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:38.885 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:39.143 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:39.143 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:39.143 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:39.143 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:39.143 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:39.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:39.712 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.971 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.972 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.972 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.231 00:29:40.231 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:40.231 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:40.231 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.490 10:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:40.490 { 00:29:40.490 "cntlid": 13, 00:29:40.490 "qid": 0, 00:29:40.490 "state": "enabled", 00:29:40.490 "thread": "nvmf_tgt_poll_group_000", 00:29:40.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:40.490 "listen_address": { 00:29:40.490 "trtype": "TCP", 00:29:40.490 "adrfam": "IPv4", 00:29:40.490 "traddr": "10.0.0.2", 00:29:40.490 "trsvcid": "4420" 00:29:40.490 }, 00:29:40.490 "peer_address": { 00:29:40.490 "trtype": "TCP", 00:29:40.490 "adrfam": "IPv4", 00:29:40.490 "traddr": "10.0.0.1", 00:29:40.490 "trsvcid": "58178" 00:29:40.490 }, 00:29:40.490 "auth": { 00:29:40.490 "state": "completed", 00:29:40.490 "digest": "sha256", 00:29:40.490 "dhgroup": "ffdhe2048" 00:29:40.490 } 00:29:40.490 } 00:29:40.490 ]' 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:40.490 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:40.749 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:40.749 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:41.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.317 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.576 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.577 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.577 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:41.577 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:41.577 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:41.836 00:29:41.836 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:41.836 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:41.836 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:42.095 { 00:29:42.095 "cntlid": 15, 00:29:42.095 "qid": 0, 00:29:42.095 "state": "enabled", 00:29:42.095 "thread": "nvmf_tgt_poll_group_000", 00:29:42.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:42.095 "listen_address": { 00:29:42.095 "trtype": "TCP", 00:29:42.095 "adrfam": "IPv4", 00:29:42.095 "traddr": "10.0.0.2", 00:29:42.095 "trsvcid": "4420" 00:29:42.095 }, 00:29:42.095 "peer_address": { 00:29:42.095 "trtype": "TCP", 00:29:42.095 "adrfam": "IPv4", 00:29:42.095 "traddr": "10.0.0.1", 
00:29:42.095 "trsvcid": "58192" 00:29:42.095 }, 00:29:42.095 "auth": { 00:29:42.095 "state": "completed", 00:29:42.095 "digest": "sha256", 00:29:42.095 "dhgroup": "ffdhe2048" 00:29:42.095 } 00:29:42.095 } 00:29:42.095 ]' 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:42.095 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:42.353 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:42.353 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:42.353 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:42.353 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:42.353 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:42.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:42.921 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:43.180 10:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.180 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.438 00:29:43.438 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:43.438 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:43.438 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.697 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:43.697 { 00:29:43.697 "cntlid": 17, 00:29:43.697 "qid": 0, 00:29:43.697 "state": "enabled", 00:29:43.697 "thread": "nvmf_tgt_poll_group_000", 00:29:43.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:43.697 "listen_address": { 00:29:43.697 "trtype": "TCP", 00:29:43.697 "adrfam": "IPv4", 00:29:43.697 "traddr": "10.0.0.2", 00:29:43.697 "trsvcid": "4420" 00:29:43.697 }, 00:29:43.697 "peer_address": { 00:29:43.697 "trtype": "TCP", 00:29:43.697 "adrfam": "IPv4", 00:29:43.697 "traddr": "10.0.0.1", 00:29:43.698 "trsvcid": "58204" 00:29:43.698 }, 00:29:43.698 "auth": { 00:29:43.698 "state": "completed", 00:29:43.698 "digest": "sha256", 00:29:43.698 "dhgroup": "ffdhe3072" 00:29:43.698 } 00:29:43.698 } 00:29:43.698 ]' 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:43.698 10:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:43.698 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:43.957 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:43.957 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:44.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:44.523 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:44.523 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.781 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.781 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.782 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.782 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.040 00:29:45.040 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:45.040 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:45.040 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:45.298 { 00:29:45.298 "cntlid": 19, 00:29:45.298 "qid": 0, 00:29:45.298 "state": "enabled", 00:29:45.298 "thread": "nvmf_tgt_poll_group_000", 00:29:45.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:45.298 "listen_address": { 00:29:45.298 "trtype": "TCP", 00:29:45.298 "adrfam": "IPv4", 00:29:45.298 "traddr": "10.0.0.2", 00:29:45.298 "trsvcid": "4420" 00:29:45.298 }, 00:29:45.298 "peer_address": { 00:29:45.298 "trtype": "TCP", 00:29:45.298 "adrfam": "IPv4", 00:29:45.298 "traddr": "10.0.0.1", 00:29:45.298 "trsvcid": "58224" 00:29:45.298 }, 00:29:45.298 "auth": { 00:29:45.298 "state": "completed", 00:29:45.298 "digest": "sha256", 00:29:45.298 "dhgroup": "ffdhe3072" 00:29:45.298 } 00:29:45.298 } 00:29:45.298 ]' 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:45.298 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:45.557 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:45.557 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:46.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:46.124 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:46.124 10:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.383 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.640 00:29:46.640 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:46.640 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:46.640 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:46.900 { 00:29:46.900 "cntlid": 21, 00:29:46.900 "qid": 0, 00:29:46.900 "state": "enabled", 00:29:46.900 "thread": "nvmf_tgt_poll_group_000", 00:29:46.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:46.900 "listen_address": { 00:29:46.900 "trtype": "TCP", 00:29:46.900 "adrfam": "IPv4", 00:29:46.900 "traddr": "10.0.0.2", 00:29:46.900 
"trsvcid": "4420" 00:29:46.900 }, 00:29:46.900 "peer_address": { 00:29:46.900 "trtype": "TCP", 00:29:46.900 "adrfam": "IPv4", 00:29:46.900 "traddr": "10.0.0.1", 00:29:46.900 "trsvcid": "42710" 00:29:46.900 }, 00:29:46.900 "auth": { 00:29:46.900 "state": "completed", 00:29:46.900 "digest": "sha256", 00:29:46.900 "dhgroup": "ffdhe3072" 00:29:46.900 } 00:29:46.900 } 00:29:46.900 ]' 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:46.900 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:46.900 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:46.900 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:46.900 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:47.158 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:47.158 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:47.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:47.725 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:47.983 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:48.265 00:29:48.265 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:48.265 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:48.265 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:48.523 { 00:29:48.523 "cntlid": 23, 00:29:48.523 "qid": 0, 00:29:48.523 "state": "enabled", 00:29:48.523 "thread": "nvmf_tgt_poll_group_000", 00:29:48.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:48.523 "listen_address": { 00:29:48.523 "trtype": "TCP", 00:29:48.523 "adrfam": "IPv4", 00:29:48.523 "traddr": "10.0.0.2", 00:29:48.523 "trsvcid": "4420" 00:29:48.523 }, 00:29:48.523 "peer_address": { 00:29:48.523 "trtype": "TCP", 00:29:48.523 "adrfam": "IPv4", 00:29:48.523 "traddr": "10.0.0.1", 00:29:48.523 "trsvcid": "42746" 00:29:48.523 }, 00:29:48.523 "auth": { 00:29:48.523 "state": "completed", 00:29:48.523 "digest": "sha256", 00:29:48.523 "dhgroup": "ffdhe3072" 00:29:48.523 } 00:29:48.523 } 00:29:48.523 ]' 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:48.523 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:48.523 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:48.780 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:48.781 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:49.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
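The `--dhchap-secret` strings passed to `nvme connect` throughout this log use the NVMe-oF DH-HMAC-CHAP secret representation: `DHHC-1:<hh>:<base64 payload>:`, where the payload is the raw key followed by a 4-byte CRC-32. A minimal sketch that inspects one of the secrets from the trace above (the structure check is an illustration, not part of auth.sh):

```shell
#!/bin/sh
# Secret copied verbatim from the nvme_connect invocation in the log above.
secret='DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==:'

# Field 3 of the colon-separated string is the base64 payload (key || CRC-32).
b64=$(printf '%s' "$secret" | cut -d: -f3)

# Decode and count the payload bytes; for a 48-byte key this should be
# 48 + 4 (CRC-32) = 52 bytes.
len=$(printf '%s' "$b64" | base64 -d | wc -c | tr -d ' ')
echo "decoded payload: $len bytes"
```

The `02` in the second field identifies the hash used to transform the secret; a `00` there would mean the base64 payload is used as the key directly.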
00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:49.347 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.604 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.862 00:29:49.862 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:49.862 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:49.862 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.121 10:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:50.121 { 00:29:50.121 "cntlid": 25, 00:29:50.121 "qid": 0, 00:29:50.121 "state": "enabled", 00:29:50.121 "thread": "nvmf_tgt_poll_group_000", 00:29:50.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:50.121 "listen_address": { 00:29:50.121 "trtype": "TCP", 00:29:50.121 "adrfam": "IPv4", 00:29:50.121 "traddr": "10.0.0.2", 00:29:50.121 "trsvcid": "4420" 00:29:50.121 }, 00:29:50.121 "peer_address": { 00:29:50.121 "trtype": "TCP", 00:29:50.121 "adrfam": "IPv4", 00:29:50.121 "traddr": "10.0.0.1", 00:29:50.121 "trsvcid": "42766" 00:29:50.121 }, 00:29:50.121 "auth": { 00:29:50.121 "state": "completed", 00:29:50.121 "digest": "sha256", 00:29:50.121 "dhgroup": "ffdhe4096" 00:29:50.121 } 00:29:50.121 } 00:29:50.121 ]' 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:50.121 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:50.379 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:50.379 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:50.946 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:50.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:50.946 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:50.947 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.947 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.947 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.947 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:50.947 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:50.947 10:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.206 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.465 00:29:51.465 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:51.465 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:51.465 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:51.725 { 00:29:51.725 "cntlid": 27, 00:29:51.725 "qid": 0, 00:29:51.725 "state": "enabled", 00:29:51.725 "thread": "nvmf_tgt_poll_group_000", 00:29:51.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:51.725 "listen_address": { 00:29:51.725 "trtype": "TCP", 00:29:51.725 "adrfam": "IPv4", 00:29:51.725 "traddr": "10.0.0.2", 00:29:51.725 
"trsvcid": "4420" 00:29:51.725 }, 00:29:51.725 "peer_address": { 00:29:51.725 "trtype": "TCP", 00:29:51.725 "adrfam": "IPv4", 00:29:51.725 "traddr": "10.0.0.1", 00:29:51.725 "trsvcid": "42804" 00:29:51.725 }, 00:29:51.725 "auth": { 00:29:51.725 "state": "completed", 00:29:51.725 "digest": "sha256", 00:29:51.725 "dhgroup": "ffdhe4096" 00:29:51.725 } 00:29:51.725 } 00:29:51.725 ]' 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:51.725 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:51.984 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:51.984 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:52.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:52.551 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.552 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.811 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.071 00:29:53.071 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:53.071 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:29:53.071 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:53.330 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:53.331 { 00:29:53.331 "cntlid": 29, 00:29:53.331 "qid": 0, 00:29:53.331 "state": "enabled", 00:29:53.331 "thread": "nvmf_tgt_poll_group_000", 00:29:53.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:53.331 "listen_address": { 00:29:53.331 "trtype": "TCP", 00:29:53.331 "adrfam": "IPv4", 00:29:53.331 "traddr": "10.0.0.2", 00:29:53.331 "trsvcid": "4420" 00:29:53.331 }, 00:29:53.331 "peer_address": { 00:29:53.331 "trtype": "TCP", 00:29:53.331 "adrfam": "IPv4", 00:29:53.331 "traddr": "10.0.0.1", 00:29:53.331 "trsvcid": "42832" 00:29:53.331 }, 00:29:53.331 "auth": { 00:29:53.331 "state": "completed", 00:29:53.331 "digest": "sha256", 00:29:53.331 "dhgroup": "ffdhe4096" 00:29:53.331 } 00:29:53.331 } 00:29:53.331 ]' 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:53.331 10:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:53.331 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:53.590 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:53.590 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:54.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:54.157 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:54.415 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:54.673 00:29:54.673 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:54.673 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:54.673 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:54.932 { 00:29:54.932 "cntlid": 31, 00:29:54.932 "qid": 0, 00:29:54.932 "state": "enabled", 00:29:54.932 "thread": "nvmf_tgt_poll_group_000", 00:29:54.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:54.932 "listen_address": { 00:29:54.932 "trtype": "TCP", 00:29:54.932 "adrfam": "IPv4", 00:29:54.932 "traddr": "10.0.0.2", 00:29:54.932 "trsvcid": "4420" 00:29:54.932 }, 00:29:54.932 "peer_address": { 00:29:54.932 "trtype": "TCP", 00:29:54.932 "adrfam": "IPv4", 00:29:54.932 "traddr": "10.0.0.1", 00:29:54.932 "trsvcid": "42858" 00:29:54.932 }, 00:29:54.932 "auth": { 00:29:54.932 "state": "completed", 00:29:54.932 "digest": "sha256", 00:29:54.932 "dhgroup": "ffdhe4096" 00:29:54.932 } 00:29:54.932 } 00:29:54.932 ]' 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:54.932 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:54.932 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:54.932 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:54.932 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:54.932 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:54.932 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:55.332 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:55.332 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:29:55.633 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:55.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:55.892 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.892 10:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.892 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.893 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.893 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.893 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.893 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.461 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:56.461 { 00:29:56.461 "cntlid": 33, 00:29:56.461 "qid": 0, 00:29:56.461 "state": "enabled", 00:29:56.461 "thread": "nvmf_tgt_poll_group_000", 00:29:56.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:56.461 "listen_address": { 00:29:56.461 "trtype": "TCP", 00:29:56.461 "adrfam": "IPv4", 00:29:56.461 "traddr": "10.0.0.2", 00:29:56.461 
"trsvcid": "4420" 00:29:56.461 }, 00:29:56.461 "peer_address": { 00:29:56.461 "trtype": "TCP", 00:29:56.461 "adrfam": "IPv4", 00:29:56.461 "traddr": "10.0.0.1", 00:29:56.461 "trsvcid": "34162" 00:29:56.461 }, 00:29:56.461 "auth": { 00:29:56.461 "state": "completed", 00:29:56.461 "digest": "sha256", 00:29:56.461 "dhgroup": "ffdhe6144" 00:29:56.461 } 00:29:56.461 } 00:29:56.461 ]' 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:56.461 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:56.720 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:56.720 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:56.720 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:56.720 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:56.720 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:56.978 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:56.978 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:57.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:57.543 10:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.543 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.800 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.800 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.800 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.800 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.058 00:29:58.058 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:58.058 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:58.058 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:58.316 { 00:29:58.316 "cntlid": 35, 00:29:58.316 "qid": 0, 00:29:58.316 "state": "enabled", 00:29:58.316 "thread": "nvmf_tgt_poll_group_000", 00:29:58.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:58.316 "listen_address": { 00:29:58.316 "trtype": "TCP", 00:29:58.316 "adrfam": "IPv4", 00:29:58.316 "traddr": "10.0.0.2", 00:29:58.316 "trsvcid": "4420" 00:29:58.316 }, 00:29:58.316 "peer_address": { 00:29:58.316 "trtype": "TCP", 00:29:58.316 "adrfam": "IPv4", 00:29:58.316 "traddr": "10.0.0.1", 00:29:58.316 "trsvcid": "34188" 00:29:58.316 }, 00:29:58.316 "auth": { 00:29:58.316 "state": "completed", 00:29:58.316 "digest": "sha256", 00:29:58.316 "dhgroup": "ffdhe6144" 00:29:58.316 } 00:29:58.316 } 00:29:58.316 ]' 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:58.316 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:58.316 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:58.574 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:58.574 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:59.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:59.157 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:59.418 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:59.676 00:29:59.676 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:59.676 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:59.676 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:59.933 10:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:59.933 { 00:29:59.933 "cntlid": 37, 00:29:59.933 "qid": 0, 00:29:59.933 "state": "enabled", 00:29:59.933 "thread": "nvmf_tgt_poll_group_000", 00:29:59.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:29:59.933 "listen_address": { 00:29:59.933 "trtype": "TCP", 00:29:59.933 "adrfam": "IPv4", 00:29:59.933 "traddr": "10.0.0.2", 00:29:59.933 "trsvcid": "4420" 00:29:59.933 }, 00:29:59.933 "peer_address": { 00:29:59.933 "trtype": "TCP", 00:29:59.933 "adrfam": "IPv4", 00:29:59.933 "traddr": "10.0.0.1", 00:29:59.933 "trsvcid": "34230" 00:29:59.933 }, 00:29:59.933 "auth": { 00:29:59.933 "state": "completed", 00:29:59.933 "digest": "sha256", 00:29:59.933 "dhgroup": "ffdhe6144" 00:29:59.933 } 00:29:59.933 } 00:29:59.933 ]' 00:29:59.933 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:59.933 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:00.195 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:00.195 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:00.762 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:01.020 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:01.277 00:30:01.277 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:01.277 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:01.277 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:01.534 { 00:30:01.534 "cntlid": 39, 00:30:01.534 "qid": 0, 00:30:01.534 "state": "enabled", 00:30:01.534 "thread": "nvmf_tgt_poll_group_000", 00:30:01.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:01.534 "listen_address": { 00:30:01.534 "trtype": "TCP", 00:30:01.534 "adrfam": 
"IPv4", 00:30:01.534 "traddr": "10.0.0.2", 00:30:01.534 "trsvcid": "4420" 00:30:01.534 }, 00:30:01.534 "peer_address": { 00:30:01.534 "trtype": "TCP", 00:30:01.534 "adrfam": "IPv4", 00:30:01.534 "traddr": "10.0.0.1", 00:30:01.534 "trsvcid": "34268" 00:30:01.534 }, 00:30:01.534 "auth": { 00:30:01.534 "state": "completed", 00:30:01.534 "digest": "sha256", 00:30:01.534 "dhgroup": "ffdhe6144" 00:30:01.534 } 00:30:01.534 } 00:30:01.534 ]' 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:30:01.534 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:01.791 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:02.357 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:02.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:02.357 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:02.357 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.357 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.615 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.615 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:02.615 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:02.615 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:30:02.616 
10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.616 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.183 00:30:03.183 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:03.183 10:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:03.183 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:03.442 { 00:30:03.442 "cntlid": 41, 00:30:03.442 "qid": 0, 00:30:03.442 "state": "enabled", 00:30:03.442 "thread": "nvmf_tgt_poll_group_000", 00:30:03.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:03.442 "listen_address": { 00:30:03.442 "trtype": "TCP", 00:30:03.442 "adrfam": "IPv4", 00:30:03.442 "traddr": "10.0.0.2", 00:30:03.442 "trsvcid": "4420" 00:30:03.442 }, 00:30:03.442 "peer_address": { 00:30:03.442 "trtype": "TCP", 00:30:03.442 "adrfam": "IPv4", 00:30:03.442 "traddr": "10.0.0.1", 00:30:03.442 "trsvcid": "34298" 00:30:03.442 }, 00:30:03.442 "auth": { 00:30:03.442 "state": "completed", 00:30:03.442 "digest": "sha256", 00:30:03.442 "dhgroup": "ffdhe8192" 00:30:03.442 } 00:30:03.442 } 00:30:03.442 ]' 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:03.442 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:03.701 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:03.701 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:04.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:04.268 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:04.525 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.092 00:30:05.092 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:05.092 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:05.092 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:05.092 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:05.350 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:05.350 { 00:30:05.350 "cntlid": 43, 00:30:05.350 "qid": 0, 00:30:05.350 "state": "enabled", 00:30:05.350 "thread": "nvmf_tgt_poll_group_000", 00:30:05.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:05.350 "listen_address": { 00:30:05.350 "trtype": "TCP", 00:30:05.350 "adrfam": "IPv4", 00:30:05.350 "traddr": "10.0.0.2", 00:30:05.350 "trsvcid": "4420" 00:30:05.350 }, 00:30:05.350 "peer_address": { 00:30:05.350 "trtype": "TCP", 00:30:05.350 "adrfam": "IPv4", 00:30:05.350 "traddr": "10.0.0.1", 00:30:05.350 "trsvcid": "34314" 00:30:05.350 }, 00:30:05.350 "auth": { 00:30:05.350 "state": "completed", 00:30:05.350 "digest": "sha256", 00:30:05.350 "dhgroup": "ffdhe8192" 00:30:05.350 } 00:30:05.350 } 00:30:05.350 ]' 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:05.350 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:05.608 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:05.608 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:06.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:06.174 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.431 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.432 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.432 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.432 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.432 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.432 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:07.036 00:30:07.036 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:07.036 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:07.036 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:07.036 { 00:30:07.036 "cntlid": 45, 00:30:07.036 "qid": 0, 00:30:07.036 "state": "enabled", 00:30:07.036 "thread": "nvmf_tgt_poll_group_000", 00:30:07.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:07.036 
"listen_address": { 00:30:07.036 "trtype": "TCP", 00:30:07.036 "adrfam": "IPv4", 00:30:07.036 "traddr": "10.0.0.2", 00:30:07.036 "trsvcid": "4420" 00:30:07.036 }, 00:30:07.036 "peer_address": { 00:30:07.036 "trtype": "TCP", 00:30:07.036 "adrfam": "IPv4", 00:30:07.036 "traddr": "10.0.0.1", 00:30:07.036 "trsvcid": "47676" 00:30:07.036 }, 00:30:07.036 "auth": { 00:30:07.036 "state": "completed", 00:30:07.036 "digest": "sha256", 00:30:07.036 "dhgroup": "ffdhe8192" 00:30:07.036 } 00:30:07.036 } 00:30:07.036 ]' 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:30:07.036 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:07.293 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:07.858 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:07.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:07.858 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:07.858 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.858 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:08.118 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:08.683 00:30:08.683 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:08.683 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:30:08.683 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.940 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:08.940 { 00:30:08.940 "cntlid": 47, 00:30:08.940 "qid": 0, 00:30:08.940 "state": "enabled", 00:30:08.940 "thread": "nvmf_tgt_poll_group_000", 00:30:08.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:08.940 "listen_address": { 00:30:08.940 "trtype": "TCP", 00:30:08.940 "adrfam": "IPv4", 00:30:08.940 "traddr": "10.0.0.2", 00:30:08.940 "trsvcid": "4420" 00:30:08.940 }, 00:30:08.940 "peer_address": { 00:30:08.940 "trtype": "TCP", 00:30:08.940 "adrfam": "IPv4", 00:30:08.941 "traddr": "10.0.0.1", 00:30:08.941 "trsvcid": "47694" 00:30:08.941 }, 00:30:08.941 "auth": { 00:30:08.941 "state": "completed", 00:30:08.941 "digest": "sha256", 00:30:08.941 "dhgroup": "ffdhe8192" 00:30:08.941 } 00:30:08.941 } 00:30:08.941 ]' 00:30:08.941 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:08.941 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:30:08.941 10:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:08.941 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:08.941 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:08.941 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:08.941 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:08.941 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:09.198 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:09.198 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:09.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:09.763 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:10.021 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:10.279
00:30:10.279 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:10.279 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:10.279 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:10.537 {
00:30:10.537 "cntlid": 49,
00:30:10.537 "qid": 0,
00:30:10.537 "state": "enabled",
00:30:10.537 "thread": "nvmf_tgt_poll_group_000",
00:30:10.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:10.537 "listen_address": {
00:30:10.537 "trtype": "TCP",
00:30:10.537 "adrfam": "IPv4",
00:30:10.537 "traddr": "10.0.0.2",
00:30:10.537 "trsvcid": "4420"
00:30:10.537 },
00:30:10.537 "peer_address": {
00:30:10.537 "trtype": "TCP",
00:30:10.537 "adrfam": "IPv4",
00:30:10.537 "traddr": "10.0.0.1",
00:30:10.537 "trsvcid": "47716"
00:30:10.537 },
00:30:10.537 "auth": {
00:30:10.537 "state": "completed",
00:30:10.537 "digest": "sha384",
00:30:10.537 "dhgroup": "null"
00:30:10.537 }
00:30:10.537 }
00:30:10.537 ]'
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:30:10.537 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:30:10.795 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=:
00:30:10.795 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=:
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:30:11.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:11.361 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:11.620 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:11.879
00:30:11.879 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:11.879 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:11.879 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:12.138 {
00:30:12.138 "cntlid": 51,
00:30:12.138 "qid": 0,
00:30:12.138 "state": "enabled",
00:30:12.138 "thread": "nvmf_tgt_poll_group_000",
00:30:12.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:12.138 "listen_address": {
00:30:12.138 "trtype": "TCP",
00:30:12.138 "adrfam": "IPv4",
00:30:12.138 "traddr": "10.0.0.2",
00:30:12.138 "trsvcid": "4420"
00:30:12.138 },
00:30:12.138 "peer_address": {
00:30:12.138 "trtype": "TCP",
00:30:12.138 "adrfam": "IPv4",
00:30:12.138 "traddr": "10.0.0.1",
00:30:12.138 "trsvcid": "47744"
00:30:12.138 },
00:30:12.138 "auth": {
00:30:12.138 "state": "completed",
00:30:12.138 "digest": "sha384",
00:30:12.138 "dhgroup": "null"
00:30:12.138 }
00:30:12.138 }
00:30:12.138 ]'
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:30:12.138 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:30:12.396 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==:
00:30:12.396 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==:
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:30:12.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:12.963 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:13.221 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:13.479
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.479 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:13.737 {
00:30:13.737 "cntlid": 53,
00:30:13.737 "qid": 0,
00:30:13.737 "state": "enabled",
00:30:13.737 "thread": "nvmf_tgt_poll_group_000",
00:30:13.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:13.737 "listen_address": {
00:30:13.737 "trtype": "TCP",
00:30:13.737 "adrfam": "IPv4",
00:30:13.737 "traddr": "10.0.0.2",
00:30:13.737 "trsvcid": "4420"
00:30:13.737 },
00:30:13.737 "peer_address": {
00:30:13.737 "trtype": "TCP",
00:30:13.737 "adrfam": "IPv4",
00:30:13.737 "traddr": "10.0.0.1",
00:30:13.737 "trsvcid": "47762"
00:30:13.737 },
00:30:13.737 "auth": {
00:30:13.737 "state": "completed",
00:30:13.737 "digest": "sha384",
00:30:13.737 "dhgroup": "null"
00:30:13.737 }
00:30:13.737 }
00:30:13.737 ]'
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:30:13.737 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:30:13.995 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba:
00:30:13.995 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba:
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:30:14.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:30:14.561 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:30:14.818
00:30:14.818 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:14.818 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:14.818 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:15.076 {
00:30:15.076 "cntlid": 55,
00:30:15.076 "qid": 0,
00:30:15.076 "state": "enabled",
00:30:15.076 "thread": "nvmf_tgt_poll_group_000",
00:30:15.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:15.076 "listen_address": {
00:30:15.076 "trtype": "TCP",
00:30:15.076 "adrfam": "IPv4",
00:30:15.076 "traddr": "10.0.0.2",
00:30:15.076 "trsvcid": "4420"
00:30:15.076 },
00:30:15.076 "peer_address": {
00:30:15.076 "trtype": "TCP",
00:30:15.076 "adrfam": "IPv4",
00:30:15.076 "traddr": "10.0.0.1",
00:30:15.076 "trsvcid": "47782"
00:30:15.076 },
00:30:15.076 "auth": {
00:30:15.076 "state": "completed",
00:30:15.076 "digest": "sha384",
00:30:15.076 "dhgroup": "null"
00:30:15.076 }
00:30:15.076 }
00:30:15.076 ]'
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:30:15.076 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=:
00:30:15.334 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=:
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:30:15.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:15.899 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.157 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:16.158 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:16.158 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:16.416
00:30:16.416 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:16.416 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:16.416 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:16.674 {
00:30:16.674 "cntlid": 57,
00:30:16.674 "qid": 0,
00:30:16.674 "state": "enabled",
00:30:16.674 "thread": "nvmf_tgt_poll_group_000",
00:30:16.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:16.674 "listen_address": {
00:30:16.674 "trtype": "TCP",
00:30:16.674 "adrfam": "IPv4",
00:30:16.674 "traddr": "10.0.0.2",
00:30:16.674 "trsvcid": "4420"
00:30:16.674 },
00:30:16.674 "peer_address": {
00:30:16.674 "trtype": "TCP",
00:30:16.674 "adrfam": "IPv4",
00:30:16.674 "traddr": "10.0.0.1",
00:30:16.674 "trsvcid": "54294"
00:30:16.674 },
00:30:16.674 "auth": {
00:30:16.674 "state": "completed",
00:30:16.674 "digest": "sha384",
00:30:16.674 "dhgroup": "ffdhe2048"
00:30:16.674 }
00:30:16.674 }
00:30:16.674 ]'
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:30:16.674 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:30:16.932 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:30:16.932 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:30:16.932 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:30:16.932 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:30:16.932 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=:
00:30:16.932 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=:
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:30:17.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:17.499 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:17.758 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:18.015
00:30:18.015 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:30:18.015 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:30:18.015 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:30:18.272 {
00:30:18.272 "cntlid": 59,
00:30:18.272 "qid": 0,
00:30:18.272 "state": "enabled",
00:30:18.272 "thread": "nvmf_tgt_poll_group_000",
00:30:18.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:30:18.272 "listen_address": {
00:30:18.272 "trtype": "TCP",
00:30:18.272 "adrfam": "IPv4",
00:30:18.272 "traddr": "10.0.0.2",
00:30:18.272 "trsvcid": "4420"
00:30:18.272 },
00:30:18.272 "peer_address": {
00:30:18.272 "trtype": "TCP",
00:30:18.272 "adrfam": "IPv4",
00:30:18.272 "traddr": "10.0.0.1",
00:30:18.272 "trsvcid": "54330"
00:30:18.272 },
00:30:18.272 "auth": {
00:30:18.272 "state": "completed",
00:30:18.272 "digest": "sha384",
00:30:18.272 "dhgroup": "ffdhe2048"
00:30:18.272 }
00:30:18.272 }
00:30:18.272 ]'
00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:30:18.272 10:41:19
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:18.272 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:18.530 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:18.530 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:19.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:19.095 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.354 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.613 00:30:19.613 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:19.613 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:19.613 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:19.872 10:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:19.872 { 00:30:19.872 "cntlid": 61, 00:30:19.872 "qid": 0, 00:30:19.872 "state": "enabled", 00:30:19.872 "thread": "nvmf_tgt_poll_group_000", 00:30:19.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:19.872 "listen_address": { 00:30:19.872 "trtype": "TCP", 00:30:19.872 "adrfam": "IPv4", 00:30:19.872 "traddr": "10.0.0.2", 00:30:19.872 "trsvcid": "4420" 00:30:19.872 }, 00:30:19.872 "peer_address": { 00:30:19.872 "trtype": "TCP", 00:30:19.872 "adrfam": "IPv4", 00:30:19.872 "traddr": "10.0.0.1", 00:30:19.872 "trsvcid": "54356" 00:30:19.872 }, 00:30:19.872 "auth": { 00:30:19.872 "state": "completed", 00:30:19.872 "digest": "sha384", 00:30:19.872 "dhgroup": "ffdhe2048" 00:30:19.872 } 00:30:19.872 } 00:30:19.872 ]' 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:19.872 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:19.872 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:19.872 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:19.872 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:20.130 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:20.130 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:20.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:20.698 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.957 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.957 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.957 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:20.957 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:20.957 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:21.215 00:30:21.215 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:21.215 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:21.215 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:21.474 { 00:30:21.474 "cntlid": 63, 00:30:21.474 "qid": 0, 00:30:21.474 "state": "enabled", 00:30:21.474 "thread": "nvmf_tgt_poll_group_000", 00:30:21.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:21.474 "listen_address": { 00:30:21.474 "trtype": "TCP", 00:30:21.474 "adrfam": 
"IPv4", 00:30:21.474 "traddr": "10.0.0.2", 00:30:21.474 "trsvcid": "4420" 00:30:21.474 }, 00:30:21.474 "peer_address": { 00:30:21.474 "trtype": "TCP", 00:30:21.474 "adrfam": "IPv4", 00:30:21.474 "traddr": "10.0.0.1", 00:30:21.474 "trsvcid": "54386" 00:30:21.474 }, 00:30:21.474 "auth": { 00:30:21.474 "state": "completed", 00:30:21.474 "digest": "sha384", 00:30:21.474 "dhgroup": "ffdhe2048" 00:30:21.474 } 00:30:21.474 } 00:30:21.474 ]' 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:21.474 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:21.734 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:21.734 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:22.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:22.301 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:22.559 
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:22.559 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:22.560 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:22.818 00:30:22.818 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:22.818 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:22.818 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:23.077 { 00:30:23.077 "cntlid": 65, 00:30:23.077 "qid": 0, 00:30:23.077 "state": "enabled", 00:30:23.077 "thread": "nvmf_tgt_poll_group_000", 00:30:23.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:23.077 "listen_address": { 00:30:23.077 "trtype": "TCP", 00:30:23.077 "adrfam": "IPv4", 00:30:23.077 "traddr": "10.0.0.2", 00:30:23.077 "trsvcid": "4420" 00:30:23.077 }, 00:30:23.077 "peer_address": { 00:30:23.077 "trtype": "TCP", 00:30:23.077 "adrfam": "IPv4", 00:30:23.077 "traddr": "10.0.0.1", 00:30:23.077 "trsvcid": "54414" 00:30:23.077 }, 00:30:23.077 "auth": { 00:30:23.077 "state": "completed", 00:30:23.077 "digest": "sha384", 00:30:23.077 "dhgroup": "ffdhe3072" 00:30:23.077 } 00:30:23.077 } 00:30:23.077 ]' 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:23.077 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:23.335 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:23.335 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:23.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:23.902 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:24.159 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.160 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.416 00:30:24.416 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:24.416 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:24.416 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:24.674 10:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:24.674 { 00:30:24.674 "cntlid": 67, 00:30:24.674 "qid": 0, 00:30:24.674 "state": "enabled", 00:30:24.674 "thread": "nvmf_tgt_poll_group_000", 00:30:24.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:24.674 "listen_address": { 00:30:24.674 "trtype": "TCP", 00:30:24.674 "adrfam": "IPv4", 00:30:24.674 "traddr": "10.0.0.2", 00:30:24.674 "trsvcid": "4420" 00:30:24.674 }, 00:30:24.674 "peer_address": { 00:30:24.674 "trtype": "TCP", 00:30:24.674 "adrfam": "IPv4", 00:30:24.674 "traddr": "10.0.0.1", 00:30:24.674 "trsvcid": "54432" 00:30:24.674 }, 00:30:24.674 "auth": { 00:30:24.674 "state": "completed", 00:30:24.674 "digest": "sha384", 00:30:24.674 "dhgroup": "ffdhe3072" 00:30:24.674 } 00:30:24.674 } 00:30:24.674 ]' 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:24.674 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:24.932 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:24.932 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:25.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:25.498 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:25.757 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:26.015 00:30:26.015 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:26.015 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:26.015 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:26.273 { 00:30:26.273 "cntlid": 69, 00:30:26.273 "qid": 0, 00:30:26.273 "state": "enabled", 00:30:26.273 "thread": "nvmf_tgt_poll_group_000", 00:30:26.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:26.273 
"listen_address": { 00:30:26.273 "trtype": "TCP", 00:30:26.273 "adrfam": "IPv4", 00:30:26.273 "traddr": "10.0.0.2", 00:30:26.273 "trsvcid": "4420" 00:30:26.273 }, 00:30:26.273 "peer_address": { 00:30:26.273 "trtype": "TCP", 00:30:26.273 "adrfam": "IPv4", 00:30:26.273 "traddr": "10.0.0.1", 00:30:26.273 "trsvcid": "57942" 00:30:26.273 }, 00:30:26.273 "auth": { 00:30:26.273 "state": "completed", 00:30:26.273 "digest": "sha384", 00:30:26.273 "dhgroup": "ffdhe3072" 00:30:26.273 } 00:30:26.273 } 00:30:26.273 ]' 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:26.273 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:26.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:26.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:27.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:27.096 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:27.354 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:27.612 00:30:27.612 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:27.612 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:30:27.612 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:27.871 { 00:30:27.871 "cntlid": 71, 00:30:27.871 "qid": 0, 00:30:27.871 "state": "enabled", 00:30:27.871 "thread": "nvmf_tgt_poll_group_000", 00:30:27.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:27.871 "listen_address": { 00:30:27.871 "trtype": "TCP", 00:30:27.871 "adrfam": "IPv4", 00:30:27.871 "traddr": "10.0.0.2", 00:30:27.871 "trsvcid": "4420" 00:30:27.871 }, 00:30:27.871 "peer_address": { 00:30:27.871 "trtype": "TCP", 00:30:27.871 "adrfam": "IPv4", 00:30:27.871 "traddr": "10.0.0.1", 00:30:27.871 "trsvcid": "57974" 00:30:27.871 }, 00:30:27.871 "auth": { 00:30:27.871 "state": "completed", 00:30:27.871 "digest": "sha384", 00:30:27.871 "dhgroup": "ffdhe3072" 00:30:27.871 } 00:30:27.871 } 00:30:27.871 ]' 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:27.871 10:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:27.871 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:28.139 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:28.139 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:28.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:28.713 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:28.971 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.972 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:29.230 00:30:29.230 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:29.230 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:29.230 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.488 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:29.488 { 00:30:29.488 "cntlid": 73, 00:30:29.488 "qid": 0, 00:30:29.488 "state": "enabled", 00:30:29.488 "thread": "nvmf_tgt_poll_group_000", 00:30:29.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:29.488 "listen_address": { 00:30:29.488 "trtype": "TCP", 00:30:29.488 "adrfam": "IPv4", 00:30:29.488 "traddr": "10.0.0.2", 00:30:29.488 "trsvcid": "4420" 00:30:29.488 }, 00:30:29.488 "peer_address": { 00:30:29.488 "trtype": "TCP", 00:30:29.488 "adrfam": "IPv4", 00:30:29.488 "traddr": "10.0.0.1", 00:30:29.488 "trsvcid": "58006" 00:30:29.488 }, 00:30:29.488 "auth": { 00:30:29.488 "state": "completed", 00:30:29.488 "digest": "sha384", 00:30:29.488 "dhgroup": "ffdhe4096" 00:30:29.488 } 00:30:29.488 } 00:30:29.488 ]' 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:29.488 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:29.488 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:29.747 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:29.747 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:30.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:30.313 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.571 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.830 00:30:30.830 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:30.830 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:30.830 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:31.089 { 00:30:31.089 "cntlid": 75, 00:30:31.089 "qid": 0, 00:30:31.089 "state": "enabled", 00:30:31.089 "thread": "nvmf_tgt_poll_group_000", 00:30:31.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:31.089 
"listen_address": { 00:30:31.089 "trtype": "TCP", 00:30:31.089 "adrfam": "IPv4", 00:30:31.089 "traddr": "10.0.0.2", 00:30:31.089 "trsvcid": "4420" 00:30:31.089 }, 00:30:31.089 "peer_address": { 00:30:31.089 "trtype": "TCP", 00:30:31.089 "adrfam": "IPv4", 00:30:31.089 "traddr": "10.0.0.1", 00:30:31.089 "trsvcid": "58012" 00:30:31.089 }, 00:30:31.089 "auth": { 00:30:31.089 "state": "completed", 00:30:31.089 "digest": "sha384", 00:30:31.089 "dhgroup": "ffdhe4096" 00:30:31.089 } 00:30:31.089 } 00:30:31.089 ]' 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:31.089 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:31.348 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:31.348 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:31.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:31.967 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.225 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.483 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.483 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:32.741 { 00:30:32.741 "cntlid": 77, 00:30:32.741 "qid": 0, 00:30:32.741 "state": "enabled", 00:30:32.741 "thread": "nvmf_tgt_poll_group_000", 00:30:32.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:32.741 "listen_address": { 00:30:32.741 "trtype": "TCP", 00:30:32.741 "adrfam": "IPv4", 00:30:32.741 "traddr": "10.0.0.2", 00:30:32.741 "trsvcid": "4420" 00:30:32.741 }, 00:30:32.741 "peer_address": { 00:30:32.741 "trtype": "TCP", 00:30:32.741 "adrfam": "IPv4", 00:30:32.741 "traddr": "10.0.0.1", 00:30:32.741 "trsvcid": "58046" 00:30:32.741 }, 00:30:32.741 "auth": { 00:30:32.741 "state": "completed", 00:30:32.741 "digest": "sha384", 00:30:32.741 "dhgroup": "ffdhe4096" 00:30:32.741 } 00:30:32.741 } 00:30:32.741 ]' 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:32.741 10:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:32.741 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:32.999 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:32.999 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:33.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:33.566 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:33.824 10:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:33.824 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:34.086 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.086 10:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:34.086 { 00:30:34.086 "cntlid": 79, 00:30:34.086 "qid": 0, 00:30:34.086 "state": "enabled", 00:30:34.086 "thread": "nvmf_tgt_poll_group_000", 00:30:34.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:34.086 "listen_address": { 00:30:34.086 "trtype": "TCP", 00:30:34.086 "adrfam": "IPv4", 00:30:34.086 "traddr": "10.0.0.2", 00:30:34.086 "trsvcid": "4420" 00:30:34.086 }, 00:30:34.086 "peer_address": { 00:30:34.086 "trtype": "TCP", 00:30:34.086 "adrfam": "IPv4", 00:30:34.086 "traddr": "10.0.0.1", 00:30:34.086 "trsvcid": "58088" 00:30:34.086 }, 00:30:34.086 "auth": { 00:30:34.086 "state": "completed", 00:30:34.086 "digest": "sha384", 00:30:34.086 "dhgroup": "ffdhe4096" 00:30:34.086 } 00:30:34.086 } 00:30:34.086 ]' 00:30:34.086 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:34.344 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:34.344 10:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:34.602 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:34.602 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:35.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:30:35.169 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:35.428 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:35.686 00:30:35.686 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:35.686 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:35.686 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:35.945 { 00:30:35.945 "cntlid": 81, 00:30:35.945 "qid": 0, 00:30:35.945 "state": "enabled", 00:30:35.945 "thread": "nvmf_tgt_poll_group_000", 00:30:35.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:35.945 "listen_address": { 
00:30:35.945 "trtype": "TCP", 00:30:35.945 "adrfam": "IPv4", 00:30:35.945 "traddr": "10.0.0.2", 00:30:35.945 "trsvcid": "4420" 00:30:35.945 }, 00:30:35.945 "peer_address": { 00:30:35.945 "trtype": "TCP", 00:30:35.945 "adrfam": "IPv4", 00:30:35.945 "traddr": "10.0.0.1", 00:30:35.945 "trsvcid": "58124" 00:30:35.945 }, 00:30:35.945 "auth": { 00:30:35.945 "state": "completed", 00:30:35.945 "digest": "sha384", 00:30:35.945 "dhgroup": "ffdhe6144" 00:30:35.945 } 00:30:35.945 } 00:30:35.945 ]' 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:35.945 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:35.945 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:35.945 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:35.945 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:35.945 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:35.945 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:36.203 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:36.203 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:36.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:36.771 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:37.030 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:37.288 00:30:37.288 10:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:37.288 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:37.288 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:37.546 { 00:30:37.546 "cntlid": 83, 00:30:37.546 "qid": 0, 00:30:37.546 "state": "enabled", 00:30:37.546 "thread": "nvmf_tgt_poll_group_000", 00:30:37.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:37.546 "listen_address": { 00:30:37.546 "trtype": "TCP", 00:30:37.546 "adrfam": "IPv4", 00:30:37.546 "traddr": "10.0.0.2", 00:30:37.546 "trsvcid": "4420" 00:30:37.546 }, 00:30:37.546 "peer_address": { 00:30:37.546 "trtype": "TCP", 00:30:37.546 "adrfam": "IPv4", 00:30:37.546 "traddr": "10.0.0.1", 00:30:37.546 "trsvcid": "58166" 00:30:37.546 }, 00:30:37.546 "auth": { 00:30:37.547 "state": "completed", 00:30:37.547 "digest": "sha384", 00:30:37.547 "dhgroup": "ffdhe6144" 00:30:37.547 } 00:30:37.547 } 00:30:37.547 ]' 00:30:37.547 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:30:37.547 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:37.547 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:37.547 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:37.547 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:37.805 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:37.805 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:37.805 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:37.805 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:37.805 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:38.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:38.374 10:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:38.374 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.632 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.633 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:39.200 00:30:39.200 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:39.200 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:39.201 { 00:30:39.201 "cntlid": 85, 00:30:39.201 "qid": 0, 00:30:39.201 "state": "enabled", 00:30:39.201 "thread": "nvmf_tgt_poll_group_000", 00:30:39.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:39.201 "listen_address": { 00:30:39.201 "trtype": "TCP", 00:30:39.201 "adrfam": "IPv4", 00:30:39.201 "traddr": "10.0.0.2", 00:30:39.201 "trsvcid": "4420" 00:30:39.201 }, 00:30:39.201 "peer_address": { 00:30:39.201 "trtype": "TCP", 00:30:39.201 "adrfam": "IPv4", 00:30:39.201 "traddr": "10.0.0.1", 00:30:39.201 "trsvcid": "58198" 00:30:39.201 }, 00:30:39.201 "auth": { 00:30:39.201 "state": "completed", 00:30:39.201 "digest": "sha384", 00:30:39.201 "dhgroup": "ffdhe6144" 00:30:39.201 } 00:30:39.201 } 00:30:39.201 ]' 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:39.201 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:39.460 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:39.460 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:39.460 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:30:39.460 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:39.460 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:39.718 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:39.718 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:40.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:40.285 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:40.851 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.851 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:40.851 { 00:30:40.851 "cntlid": 87, 00:30:40.851 "qid": 0, 00:30:40.851 "state": "enabled", 00:30:40.851 "thread": "nvmf_tgt_poll_group_000", 00:30:40.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:40.851 "listen_address": { 00:30:40.851 "trtype": 
"TCP", 00:30:40.851 "adrfam": "IPv4", 00:30:40.851 "traddr": "10.0.0.2", 00:30:40.851 "trsvcid": "4420" 00:30:40.851 }, 00:30:40.851 "peer_address": { 00:30:40.851 "trtype": "TCP", 00:30:40.851 "adrfam": "IPv4", 00:30:40.851 "traddr": "10.0.0.1", 00:30:40.851 "trsvcid": "58226" 00:30:40.851 }, 00:30:40.851 "auth": { 00:30:40.851 "state": "completed", 00:30:40.851 "digest": "sha384", 00:30:40.851 "dhgroup": "ffdhe6144" 00:30:40.851 } 00:30:40.851 } 00:30:40.851 ]' 00:30:40.851 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:41.109 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:41.367 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:41.367 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:41.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:41.936 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:41.936 10:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.936 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:42.504 00:30:42.504 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:42.504 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:42.504 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:42.765 { 00:30:42.765 "cntlid": 89, 00:30:42.765 "qid": 0, 00:30:42.765 "state": "enabled", 00:30:42.765 "thread": "nvmf_tgt_poll_group_000", 00:30:42.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:42.765 "listen_address": { 00:30:42.765 "trtype": "TCP", 00:30:42.765 "adrfam": "IPv4", 00:30:42.765 "traddr": "10.0.0.2", 00:30:42.765 "trsvcid": "4420" 00:30:42.765 }, 00:30:42.765 "peer_address": { 00:30:42.765 "trtype": "TCP", 00:30:42.765 "adrfam": "IPv4", 00:30:42.765 "traddr": "10.0.0.1", 00:30:42.765 "trsvcid": "58268" 00:30:42.765 }, 00:30:42.765 "auth": { 00:30:42.765 "state": "completed", 00:30:42.765 "digest": "sha384", 00:30:42.765 "dhgroup": "ffdhe8192" 00:30:42.765 } 00:30:42.765 } 00:30:42.765 ]' 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:42.765 10:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:42.765 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:43.023 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:43.023 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:43.594 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:43.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:30:43.594 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:43.594 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.595 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.595 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.595 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:43.595 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:43.595 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:43.860 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:44.426 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.426 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:44.426 { 00:30:44.426 "cntlid": 91, 00:30:44.426 "qid": 0, 00:30:44.426 "state": "enabled", 00:30:44.426 "thread": "nvmf_tgt_poll_group_000", 00:30:44.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:44.426 "listen_address": { 00:30:44.426 "trtype": "TCP", 00:30:44.426 "adrfam": "IPv4", 00:30:44.426 "traddr": "10.0.0.2", 00:30:44.426 "trsvcid": "4420" 00:30:44.426 }, 00:30:44.426 "peer_address": { 00:30:44.426 "trtype": "TCP", 00:30:44.426 "adrfam": "IPv4", 00:30:44.426 "traddr": "10.0.0.1", 00:30:44.426 "trsvcid": "58278" 00:30:44.426 }, 00:30:44.426 "auth": { 00:30:44.426 "state": "completed", 00:30:44.426 "digest": "sha384", 00:30:44.426 "dhgroup": "ffdhe8192" 00:30:44.426 } 00:30:44.426 } 00:30:44.426 ]' 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:44.685 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:44.944 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:44.944 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:45.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:45.511 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:45.778 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:46.344 00:30:46.344 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:46.345 { 00:30:46.345 "cntlid": 93, 00:30:46.345 "qid": 0, 00:30:46.345 "state": "enabled", 00:30:46.345 "thread": "nvmf_tgt_poll_group_000", 00:30:46.345 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:46.345 "listen_address": { 00:30:46.345 "trtype": "TCP", 00:30:46.345 "adrfam": "IPv4", 00:30:46.345 "traddr": "10.0.0.2", 00:30:46.345 "trsvcid": "4420" 00:30:46.345 }, 00:30:46.345 "peer_address": { 00:30:46.345 "trtype": "TCP", 00:30:46.345 "adrfam": "IPv4", 00:30:46.345 "traddr": "10.0.0.1", 00:30:46.345 "trsvcid": "39204" 00:30:46.345 }, 00:30:46.345 "auth": { 00:30:46.345 "state": "completed", 00:30:46.345 "digest": "sha384", 00:30:46.345 "dhgroup": "ffdhe8192" 00:30:46.345 } 00:30:46.345 } 00:30:46.345 ]' 00:30:46.345 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:46.603 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:46.861 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:46.862 10:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:47.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:47.431 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:47.689 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:47.993 00:30:47.993 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:30:47.993 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:47.993 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:48.252 { 00:30:48.252 "cntlid": 95, 00:30:48.252 "qid": 0, 00:30:48.252 "state": "enabled", 00:30:48.252 "thread": "nvmf_tgt_poll_group_000", 00:30:48.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:48.252 "listen_address": { 00:30:48.252 "trtype": "TCP", 00:30:48.252 "adrfam": "IPv4", 00:30:48.252 "traddr": "10.0.0.2", 00:30:48.252 "trsvcid": "4420" 00:30:48.252 }, 00:30:48.252 "peer_address": { 00:30:48.252 "trtype": "TCP", 00:30:48.252 "adrfam": "IPv4", 00:30:48.252 "traddr": "10.0.0.1", 00:30:48.252 "trsvcid": "39228" 00:30:48.252 }, 00:30:48.252 "auth": { 00:30:48.252 "state": "completed", 00:30:48.252 "digest": "sha384", 00:30:48.252 "dhgroup": "ffdhe8192" 00:30:48.252 } 00:30:48.252 } 00:30:48.252 ]' 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:48.252 10:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:48.252 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:48.511 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:48.511 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:48.511 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:48.511 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:48.511 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:49.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:49.078 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.337 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.596 00:30:49.596 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:49.596 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:49.596 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.855 10:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:49.855 { 00:30:49.855 "cntlid": 97, 00:30:49.855 "qid": 0, 00:30:49.855 "state": "enabled", 00:30:49.855 "thread": "nvmf_tgt_poll_group_000", 00:30:49.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:49.855 "listen_address": { 00:30:49.855 "trtype": "TCP", 00:30:49.855 "adrfam": "IPv4", 00:30:49.855 "traddr": "10.0.0.2", 00:30:49.855 "trsvcid": "4420" 00:30:49.855 }, 00:30:49.855 "peer_address": { 00:30:49.855 "trtype": "TCP", 00:30:49.855 "adrfam": "IPv4", 00:30:49.855 "traddr": "10.0.0.1", 00:30:49.855 "trsvcid": "39248" 00:30:49.855 }, 00:30:49.855 "auth": { 00:30:49.855 "state": "completed", 00:30:49.855 "digest": "sha512", 00:30:49.855 "dhgroup": "null" 00:30:49.855 } 00:30:49.855 } 00:30:49.855 ]' 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:30:49.855 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:49.855 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:49.855 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:49.855 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:50.114 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:50.114 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:50.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:50.681 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.940 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:51.199 00:30:51.199 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:51.199 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:51.199 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:51.458 { 00:30:51.458 "cntlid": 99, 
00:30:51.458 "qid": 0, 00:30:51.458 "state": "enabled", 00:30:51.458 "thread": "nvmf_tgt_poll_group_000", 00:30:51.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:51.458 "listen_address": { 00:30:51.458 "trtype": "TCP", 00:30:51.458 "adrfam": "IPv4", 00:30:51.458 "traddr": "10.0.0.2", 00:30:51.458 "trsvcid": "4420" 00:30:51.458 }, 00:30:51.458 "peer_address": { 00:30:51.458 "trtype": "TCP", 00:30:51.458 "adrfam": "IPv4", 00:30:51.458 "traddr": "10.0.0.1", 00:30:51.458 "trsvcid": "39272" 00:30:51.458 }, 00:30:51.458 "auth": { 00:30:51.458 "state": "completed", 00:30:51.458 "digest": "sha512", 00:30:51.458 "dhgroup": "null" 00:30:51.458 } 00:30:51.458 } 00:30:51.458 ]' 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:51.458 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:51.717 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret 
DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:51.717 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:52.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:52.288 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.617 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.927 00:30:52.927 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:52.927 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:52.927 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:52.927 { 00:30:52.927 "cntlid": 101, 00:30:52.927 "qid": 0, 00:30:52.927 "state": "enabled", 00:30:52.927 "thread": "nvmf_tgt_poll_group_000", 00:30:52.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:52.927 "listen_address": { 00:30:52.927 "trtype": "TCP", 00:30:52.927 "adrfam": "IPv4", 00:30:52.927 "traddr": "10.0.0.2", 00:30:52.927 "trsvcid": "4420" 00:30:52.927 }, 00:30:52.927 "peer_address": { 00:30:52.927 "trtype": "TCP", 00:30:52.927 "adrfam": "IPv4", 00:30:52.927 "traddr": "10.0.0.1", 00:30:52.927 "trsvcid": "39300" 00:30:52.927 }, 00:30:52.927 "auth": { 00:30:52.927 "state": "completed", 00:30:52.927 "digest": "sha512", 00:30:52.927 "dhgroup": "null" 00:30:52.927 } 00:30:52.927 } 
00:30:52.927 ]' 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:52.927 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:53.218 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:53.823 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:53.823 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:54.081 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:54.339 00:30:54.339 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:54.339 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:54.339 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:54.597 { 00:30:54.597 "cntlid": 103, 00:30:54.597 "qid": 0, 00:30:54.597 "state": "enabled", 00:30:54.597 "thread": "nvmf_tgt_poll_group_000", 00:30:54.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:54.597 "listen_address": { 00:30:54.597 "trtype": "TCP", 00:30:54.597 "adrfam": "IPv4", 00:30:54.597 "traddr": "10.0.0.2", 00:30:54.597 "trsvcid": "4420" 00:30:54.597 }, 00:30:54.597 "peer_address": { 00:30:54.597 "trtype": "TCP", 00:30:54.597 "adrfam": "IPv4", 00:30:54.597 "traddr": "10.0.0.1", 00:30:54.597 "trsvcid": "39320" 00:30:54.597 }, 00:30:54.597 "auth": { 00:30:54.597 "state": "completed", 00:30:54.597 "digest": "sha512", 00:30:54.597 "dhgroup": "null" 00:30:54.597 } 00:30:54.597 } 00:30:54.597 ]' 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:54.597 10:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:54.597 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:54.855 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:54.855 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:55.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:55.420 10:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:55.420 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.678 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.937 00:30:55.937 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:55.937 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:55.937 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:56.194 { 00:30:56.194 "cntlid": 105, 00:30:56.194 "qid": 0, 00:30:56.194 "state": "enabled", 00:30:56.194 "thread": "nvmf_tgt_poll_group_000", 00:30:56.194 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:56.194 "listen_address": { 00:30:56.194 "trtype": "TCP", 00:30:56.194 "adrfam": "IPv4", 00:30:56.194 "traddr": "10.0.0.2", 00:30:56.194 "trsvcid": "4420" 00:30:56.194 }, 00:30:56.194 "peer_address": { 00:30:56.194 "trtype": "TCP", 00:30:56.194 "adrfam": "IPv4", 00:30:56.194 "traddr": "10.0.0.1", 00:30:56.194 "trsvcid": "60004" 00:30:56.194 }, 00:30:56.194 "auth": { 00:30:56.194 "state": "completed", 00:30:56.194 "digest": "sha512", 00:30:56.194 "dhgroup": "ffdhe2048" 00:30:56.194 } 00:30:56.194 } 00:30:56.194 ]' 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:56.194 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:56.453 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret 
DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:56.453 10:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:30:57.018 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:57.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:57.019 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:57.277 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:57.277 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:57.534 00:30:57.534 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:57.534 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:57.534 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:57.791 { 00:30:57.791 "cntlid": 107, 00:30:57.791 "qid": 0, 00:30:57.791 "state": "enabled", 00:30:57.791 "thread": "nvmf_tgt_poll_group_000", 00:30:57.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:57.791 "listen_address": { 00:30:57.791 "trtype": "TCP", 00:30:57.791 "adrfam": "IPv4", 00:30:57.791 "traddr": "10.0.0.2", 00:30:57.791 "trsvcid": "4420" 00:30:57.791 }, 00:30:57.791 "peer_address": { 00:30:57.791 "trtype": "TCP", 00:30:57.791 "adrfam": "IPv4", 00:30:57.791 "traddr": "10.0.0.1", 00:30:57.791 "trsvcid": "60036" 00:30:57.791 }, 00:30:57.791 "auth": { 00:30:57.791 "state": 
"completed", 00:30:57.791 "digest": "sha512", 00:30:57.791 "dhgroup": "ffdhe2048" 00:30:57.791 } 00:30:57.791 } 00:30:57.791 ]' 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:57.791 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:58.049 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:58.050 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:30:58.616 10:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:58.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:58.616 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:58.873 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.874 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:59.132 00:30:59.132 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:59.133 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:59.133 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:59.391 
10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:59.391 { 00:30:59.391 "cntlid": 109, 00:30:59.391 "qid": 0, 00:30:59.391 "state": "enabled", 00:30:59.391 "thread": "nvmf_tgt_poll_group_000", 00:30:59.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:30:59.391 "listen_address": { 00:30:59.391 "trtype": "TCP", 00:30:59.391 "adrfam": "IPv4", 00:30:59.391 "traddr": "10.0.0.2", 00:30:59.391 "trsvcid": "4420" 00:30:59.391 }, 00:30:59.391 "peer_address": { 00:30:59.391 "trtype": "TCP", 00:30:59.391 "adrfam": "IPv4", 00:30:59.391 "traddr": "10.0.0.1", 00:30:59.391 "trsvcid": "60056" 00:30:59.391 }, 00:30:59.391 "auth": { 00:30:59.391 "state": "completed", 00:30:59.391 "digest": "sha512", 00:30:59.391 "dhgroup": "ffdhe2048" 00:30:59.391 } 00:30:59.391 } 00:30:59.391 ]' 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:59.391 10:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:59.391 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:59.649 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:30:59.649 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:00.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:00.215 
10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:00.215 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.216 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.474 10:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:00.474 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:00.732 00:31:00.732 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:00.732 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:00.732 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:00.732 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:00.991 { 00:31:00.991 "cntlid": 111, 
00:31:00.991 "qid": 0, 00:31:00.991 "state": "enabled", 00:31:00.991 "thread": "nvmf_tgt_poll_group_000", 00:31:00.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:00.991 "listen_address": { 00:31:00.991 "trtype": "TCP", 00:31:00.991 "adrfam": "IPv4", 00:31:00.991 "traddr": "10.0.0.2", 00:31:00.991 "trsvcid": "4420" 00:31:00.991 }, 00:31:00.991 "peer_address": { 00:31:00.991 "trtype": "TCP", 00:31:00.991 "adrfam": "IPv4", 00:31:00.991 "traddr": "10.0.0.1", 00:31:00.991 "trsvcid": "60076" 00:31:00.991 }, 00:31:00.991 "auth": { 00:31:00.991 "state": "completed", 00:31:00.991 "digest": "sha512", 00:31:00.991 "dhgroup": "ffdhe2048" 00:31:00.991 } 00:31:00.991 } 00:31:00.991 ]' 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:00.991 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:00.991 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:00.991 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:00.991 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:01.250 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:01.250 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:01.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.817 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:02.077 10:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.077 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.336 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:02.336 { 00:31:02.336 "cntlid": 113, 00:31:02.336 "qid": 0, 00:31:02.336 "state": "enabled", 00:31:02.336 "thread": "nvmf_tgt_poll_group_000", 00:31:02.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:02.336 "listen_address": { 00:31:02.336 "trtype": "TCP", 00:31:02.336 "adrfam": "IPv4", 00:31:02.336 "traddr": "10.0.0.2", 00:31:02.336 "trsvcid": "4420" 00:31:02.336 }, 00:31:02.336 "peer_address": { 00:31:02.336 "trtype": "TCP", 00:31:02.336 "adrfam": "IPv4", 00:31:02.336 "traddr": "10.0.0.1", 00:31:02.336 "trsvcid": "60110" 00:31:02.336 }, 00:31:02.336 "auth": { 00:31:02.336 "state": 
"completed", 00:31:02.336 "digest": "sha512", 00:31:02.336 "dhgroup": "ffdhe3072" 00:31:02.336 } 00:31:02.336 } 00:31:02.336 ]' 00:31:02.336 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:02.595 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:02.854 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:02.854 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret 
DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:03.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:03.421 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.680 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.939 00:31:03.939 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:03.939 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:03.939 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:03.939 { 00:31:03.939 "cntlid": 115, 00:31:03.939 "qid": 0, 00:31:03.939 "state": "enabled", 00:31:03.939 "thread": "nvmf_tgt_poll_group_000", 00:31:03.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:03.939 "listen_address": { 00:31:03.939 "trtype": "TCP", 00:31:03.939 "adrfam": "IPv4", 00:31:03.939 "traddr": "10.0.0.2", 00:31:03.939 "trsvcid": "4420" 00:31:03.939 }, 00:31:03.939 "peer_address": { 00:31:03.939 "trtype": "TCP", 00:31:03.939 "adrfam": "IPv4", 00:31:03.939 "traddr": "10.0.0.1", 00:31:03.939 "trsvcid": "60144" 00:31:03.939 }, 00:31:03.939 "auth": { 00:31:03.939 "state": "completed", 00:31:03.939 "digest": "sha512", 00:31:03.939 "dhgroup": "ffdhe3072" 00:31:03.939 } 00:31:03.939 } 00:31:03.939 ]' 00:31:03.939 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:04.209 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:04.210 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:04.210 10:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:04.210 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:04.210 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:04.210 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:04.210 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:04.467 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:04.467 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:05.034 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:05.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:05.034 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.294 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.552 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.552 10:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.552 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:05.552 { 00:31:05.552 "cntlid": 117, 00:31:05.552 "qid": 0, 00:31:05.552 "state": "enabled", 00:31:05.552 "thread": "nvmf_tgt_poll_group_000", 00:31:05.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:05.552 "listen_address": { 00:31:05.552 "trtype": "TCP", 00:31:05.552 "adrfam": "IPv4", 00:31:05.552 "traddr": "10.0.0.2", 00:31:05.552 "trsvcid": "4420" 00:31:05.552 }, 00:31:05.552 "peer_address": { 00:31:05.552 "trtype": "TCP", 00:31:05.552 "adrfam": "IPv4", 00:31:05.552 "traddr": "10.0.0.1", 00:31:05.552 "trsvcid": "38446" 00:31:05.552 }, 00:31:05.552 "auth": { 00:31:05.552 "state": "completed", 00:31:05.552 "digest": "sha512", 00:31:05.553 "dhgroup": "ffdhe3072" 00:31:05.553 } 00:31:05.553 } 00:31:05.553 ]' 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:05.811 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:06.070 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:06.070 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:06.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:06.688 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:06.946 00:31:06.946 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:06.946 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:06.946 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:07.205 { 00:31:07.205 "cntlid": 119, 00:31:07.205 "qid": 0, 00:31:07.205 "state": "enabled", 00:31:07.205 "thread": "nvmf_tgt_poll_group_000", 00:31:07.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:07.205 "listen_address": { 00:31:07.205 "trtype": "TCP", 00:31:07.205 "adrfam": "IPv4", 00:31:07.205 "traddr": "10.0.0.2", 00:31:07.205 "trsvcid": "4420" 00:31:07.205 }, 00:31:07.205 "peer_address": { 00:31:07.205 "trtype": "TCP", 00:31:07.205 "adrfam": "IPv4", 00:31:07.205 "traddr": "10.0.0.1", 
00:31:07.205 "trsvcid": "38478" 00:31:07.205 }, 00:31:07.205 "auth": { 00:31:07.205 "state": "completed", 00:31:07.205 "digest": "sha512", 00:31:07.205 "dhgroup": "ffdhe3072" 00:31:07.205 } 00:31:07.205 } 00:31:07.205 ]' 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:07.205 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:07.464 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:07.464 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:07.464 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:07.464 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:07.464 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:07.721 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:07.721 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:08.288 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:08.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:08.288 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.288 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:08.289 10:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.289 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.548 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:08.807 { 00:31:08.807 "cntlid": 121, 00:31:08.807 "qid": 0, 00:31:08.807 "state": "enabled", 00:31:08.807 "thread": "nvmf_tgt_poll_group_000", 00:31:08.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:08.807 "listen_address": { 00:31:08.807 "trtype": "TCP", 00:31:08.807 "adrfam": "IPv4", 00:31:08.807 "traddr": "10.0.0.2", 00:31:08.807 "trsvcid": "4420" 00:31:08.807 }, 00:31:08.807 "peer_address": { 00:31:08.807 "trtype": "TCP", 00:31:08.807 "adrfam": "IPv4", 00:31:08.807 "traddr": "10.0.0.1", 00:31:08.807 "trsvcid": "38516" 00:31:08.807 }, 00:31:08.807 "auth": { 00:31:08.807 "state": "completed", 00:31:08.807 "digest": "sha512", 00:31:08.807 "dhgroup": "ffdhe4096" 00:31:08.807 } 00:31:08.807 } 00:31:08.807 ]' 00:31:08.807 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:09.066 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:09.066 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:09.066 10:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:09.066 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:09.066 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:09.066 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:09.066 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:09.326 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:09.326 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:09.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:09.895 10:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:09.895 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:09.895 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.895 10:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.154 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.154 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.155 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.155 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.414 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:10.414 { 00:31:10.414 "cntlid": 123, 00:31:10.414 "qid": 0, 00:31:10.414 "state": "enabled", 00:31:10.414 "thread": "nvmf_tgt_poll_group_000", 00:31:10.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:10.414 "listen_address": { 00:31:10.414 "trtype": "TCP", 00:31:10.414 "adrfam": "IPv4", 00:31:10.414 "traddr": "10.0.0.2", 00:31:10.414 "trsvcid": "4420" 00:31:10.414 }, 00:31:10.414 "peer_address": { 00:31:10.414 "trtype": "TCP", 00:31:10.414 "adrfam": "IPv4", 00:31:10.414 "traddr": "10.0.0.1", 00:31:10.414 "trsvcid": "38538" 00:31:10.414 }, 00:31:10.414 "auth": { 00:31:10.414 "state": "completed", 00:31:10.414 "digest": "sha512", 00:31:10.414 "dhgroup": "ffdhe4096" 00:31:10.414 } 00:31:10.414 } 00:31:10.414 ]' 00:31:10.414 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:10.673 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:10.932 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:10.932 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:11.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:11.499 10:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:11.499 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.757 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.758 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:12.016 00:31:12.016 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:12.016 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:12.016 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:12.016 { 00:31:12.016 "cntlid": 125, 00:31:12.016 "qid": 0, 00:31:12.016 "state": "enabled", 00:31:12.016 "thread": "nvmf_tgt_poll_group_000", 00:31:12.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:12.016 "listen_address": { 00:31:12.016 "trtype": "TCP", 00:31:12.016 "adrfam": "IPv4", 00:31:12.016 "traddr": "10.0.0.2", 00:31:12.016 
"trsvcid": "4420" 00:31:12.016 }, 00:31:12.016 "peer_address": { 00:31:12.016 "trtype": "TCP", 00:31:12.016 "adrfam": "IPv4", 00:31:12.016 "traddr": "10.0.0.1", 00:31:12.016 "trsvcid": "38556" 00:31:12.016 }, 00:31:12.016 "auth": { 00:31:12.016 "state": "completed", 00:31:12.016 "digest": "sha512", 00:31:12.016 "dhgroup": "ffdhe4096" 00:31:12.016 } 00:31:12.016 } 00:31:12.016 ]' 00:31:12.016 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:12.275 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:12.548 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:12.548 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:13.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:13.116 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:13.374 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:13.632 { 00:31:13.632 "cntlid": 127, 00:31:13.632 "qid": 0, 00:31:13.632 "state": "enabled", 00:31:13.632 "thread": "nvmf_tgt_poll_group_000", 00:31:13.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:13.632 "listen_address": { 00:31:13.632 "trtype": "TCP", 00:31:13.632 "adrfam": "IPv4", 00:31:13.632 "traddr": "10.0.0.2", 00:31:13.632 "trsvcid": "4420" 00:31:13.632 }, 00:31:13.632 "peer_address": { 00:31:13.632 "trtype": "TCP", 00:31:13.632 "adrfam": "IPv4", 00:31:13.632 "traddr": "10.0.0.1", 00:31:13.632 "trsvcid": "38574" 00:31:13.632 }, 00:31:13.632 "auth": { 00:31:13.632 "state": "completed", 00:31:13.632 "digest": "sha512", 00:31:13.632 "dhgroup": "ffdhe4096" 00:31:13.632 } 00:31:13.632 } 00:31:13.632 ]' 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:13.632 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:13.890 
10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:13.890 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:13.890 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:13.890 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:13.890 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:13.890 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:13.890 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:14.464 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:14.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:14.723 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:14.724 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.291 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.291 10:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:15.291 { 00:31:15.291 "cntlid": 129, 00:31:15.291 "qid": 0, 00:31:15.291 "state": "enabled", 00:31:15.291 "thread": "nvmf_tgt_poll_group_000", 00:31:15.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:15.291 "listen_address": { 00:31:15.291 "trtype": "TCP", 00:31:15.291 "adrfam": "IPv4", 00:31:15.291 "traddr": "10.0.0.2", 00:31:15.291 "trsvcid": "4420" 00:31:15.291 }, 00:31:15.291 "peer_address": { 00:31:15.291 "trtype": "TCP", 00:31:15.291 "adrfam": "IPv4", 00:31:15.291 "traddr": "10.0.0.1", 00:31:15.291 "trsvcid": "38604" 00:31:15.291 }, 00:31:15.291 "auth": { 00:31:15.291 "state": "completed", 00:31:15.291 "digest": "sha512", 00:31:15.291 "dhgroup": "ffdhe6144" 00:31:15.291 } 00:31:15.291 } 00:31:15.291 ]' 00:31:15.291 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:15.547 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:15.548 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:15.806 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:15.806 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:16.372 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:16.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:16.372 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.372 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.372 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.373 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.373 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:16.373 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:16.373 10:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.631 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.889 00:31:16.889 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:16.889 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:16.889 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.147 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:17.147 { 00:31:17.147 "cntlid": 131, 00:31:17.147 "qid": 0, 00:31:17.147 "state": "enabled", 00:31:17.147 "thread": "nvmf_tgt_poll_group_000", 00:31:17.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:17.147 "listen_address": { 00:31:17.147 "trtype": "TCP", 00:31:17.147 "adrfam": "IPv4", 00:31:17.147 "traddr": "10.0.0.2", 00:31:17.147 
"trsvcid": "4420" 00:31:17.147 }, 00:31:17.147 "peer_address": { 00:31:17.147 "trtype": "TCP", 00:31:17.147 "adrfam": "IPv4", 00:31:17.147 "traddr": "10.0.0.1", 00:31:17.147 "trsvcid": "38208" 00:31:17.147 }, 00:31:17.147 "auth": { 00:31:17.147 "state": "completed", 00:31:17.147 "digest": "sha512", 00:31:17.147 "dhgroup": "ffdhe6144" 00:31:17.147 } 00:31:17.147 } 00:31:17.147 ]' 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:17.148 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:17.406 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:17.406 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:17.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:17.973 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:18.248 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:18.508 00:31:18.508 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:18.508 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:31:18.508 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:18.767 { 00:31:18.767 "cntlid": 133, 00:31:18.767 "qid": 0, 00:31:18.767 "state": "enabled", 00:31:18.767 "thread": "nvmf_tgt_poll_group_000", 00:31:18.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:18.767 "listen_address": { 00:31:18.767 "trtype": "TCP", 00:31:18.767 "adrfam": "IPv4", 00:31:18.767 "traddr": "10.0.0.2", 00:31:18.767 "trsvcid": "4420" 00:31:18.767 }, 00:31:18.767 "peer_address": { 00:31:18.767 "trtype": "TCP", 00:31:18.767 "adrfam": "IPv4", 00:31:18.767 "traddr": "10.0.0.1", 00:31:18.767 "trsvcid": "38242" 00:31:18.767 }, 00:31:18.767 "auth": { 00:31:18.767 "state": "completed", 00:31:18.767 "digest": "sha512", 00:31:18.767 "dhgroup": "ffdhe6144" 00:31:18.767 } 00:31:18.767 } 00:31:18.767 ]' 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:18.767 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:18.768 10:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:18.768 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:18.768 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:19.027 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:19.027 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:19.027 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:19.027 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:19.027 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:19.595 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:19.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:19.595 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:19.595 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.595 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:19.855 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:20.423 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:20.423 { 00:31:20.423 "cntlid": 135, 00:31:20.423 "qid": 0, 00:31:20.423 "state": "enabled", 00:31:20.423 "thread": "nvmf_tgt_poll_group_000", 00:31:20.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:20.423 "listen_address": { 00:31:20.423 "trtype": "TCP", 00:31:20.423 "adrfam": "IPv4", 00:31:20.423 "traddr": "10.0.0.2", 00:31:20.423 "trsvcid": "4420" 00:31:20.423 }, 00:31:20.423 "peer_address": { 00:31:20.423 "trtype": "TCP", 00:31:20.423 "adrfam": "IPv4", 00:31:20.423 "traddr": "10.0.0.1", 00:31:20.423 "trsvcid": "38272" 00:31:20.423 }, 00:31:20.423 "auth": { 00:31:20.423 "state": "completed", 00:31:20.423 "digest": "sha512", 00:31:20.423 "dhgroup": "ffdhe6144" 00:31:20.423 } 00:31:20.423 } 00:31:20.423 ]' 00:31:20.423 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:20.682 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:20.940 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:20.940 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:21.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:21.507 10:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.507 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:22.074 00:31:22.074 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:22.074 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:22.074 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.346 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:22.346 { 00:31:22.346 "cntlid": 137, 00:31:22.346 "qid": 0, 00:31:22.346 "state": "enabled", 00:31:22.346 "thread": "nvmf_tgt_poll_group_000", 00:31:22.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:22.346 "listen_address": { 00:31:22.346 "trtype": "TCP", 00:31:22.346 "adrfam": "IPv4", 00:31:22.346 "traddr": "10.0.0.2", 00:31:22.347 
"trsvcid": "4420" 00:31:22.347 }, 00:31:22.347 "peer_address": { 00:31:22.347 "trtype": "TCP", 00:31:22.347 "adrfam": "IPv4", 00:31:22.347 "traddr": "10.0.0.1", 00:31:22.347 "trsvcid": "38300" 00:31:22.347 }, 00:31:22.347 "auth": { 00:31:22.347 "state": "completed", 00:31:22.347 "digest": "sha512", 00:31:22.347 "dhgroup": "ffdhe8192" 00:31:22.347 } 00:31:22.347 } 00:31:22.347 ]' 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:22.347 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:22.636 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:22.636 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:23.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:23.229 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:23.487 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.487 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:24.054 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:24.054 { 00:31:24.054 "cntlid": 139, 00:31:24.054 "qid": 0, 00:31:24.054 "state": "enabled", 00:31:24.054 "thread": "nvmf_tgt_poll_group_000", 00:31:24.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:24.054 "listen_address": { 00:31:24.054 "trtype": "TCP", 00:31:24.054 "adrfam": "IPv4", 00:31:24.054 "traddr": "10.0.0.2", 00:31:24.054 "trsvcid": "4420" 00:31:24.054 }, 00:31:24.054 "peer_address": { 00:31:24.054 "trtype": "TCP", 00:31:24.054 "adrfam": "IPv4", 00:31:24.054 "traddr": "10.0.0.1", 00:31:24.054 "trsvcid": "38322" 00:31:24.054 }, 00:31:24.054 "auth": { 00:31:24.054 "state": "completed", 00:31:24.054 "digest": "sha512", 00:31:24.054 "dhgroup": "ffdhe8192" 00:31:24.054 } 00:31:24.054 } 00:31:24.054 ]' 00:31:24.054 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:24.312 10:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:24.312 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:24.570 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:24.570 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: --dhchap-ctrl-secret DHHC-1:02:ZGVlOWFlZmI0NWM4MDE1ZTk3Yzk0YWY4MTA0ODJmYWY2MDFiZjVmMTUwMjM2OGQwLxGZYQ==: 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:25.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:25.137 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.395 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.653 00:31:25.912 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:25.912 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:25.913 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:25.913 10:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:25.913 { 00:31:25.913 "cntlid": 141, 00:31:25.913 "qid": 0, 00:31:25.913 "state": "enabled", 00:31:25.913 "thread": "nvmf_tgt_poll_group_000", 00:31:25.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:25.913 "listen_address": { 00:31:25.913 "trtype": "TCP", 00:31:25.913 "adrfam": "IPv4", 00:31:25.913 "traddr": "10.0.0.2", 00:31:25.913 "trsvcid": "4420" 00:31:25.913 }, 00:31:25.913 "peer_address": { 00:31:25.913 "trtype": "TCP", 00:31:25.913 "adrfam": "IPv4", 00:31:25.913 "traddr": "10.0.0.1", 00:31:25.913 "trsvcid": "37698" 00:31:25.913 }, 00:31:25.913 "auth": { 00:31:25.913 "state": "completed", 00:31:25.913 "digest": "sha512", 00:31:25.913 "dhgroup": "ffdhe8192" 00:31:25.913 } 00:31:25.913 } 00:31:25.913 ]' 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:25.913 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:26.171 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:26.171 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:26.171 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:26.171 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:26.171 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:26.430 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:26.430 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:01:Y2M2NjFlNWZlMTViN2RkZmY2YzFjOGU2OTRiMGVkZDN//Cba: 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:26.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:26.998 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:26.998 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:27.566 00:31:27.566 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:27.566 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:27.566 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:27.824 { 00:31:27.824 "cntlid": 143, 00:31:27.824 "qid": 0, 00:31:27.824 "state": "enabled", 00:31:27.824 "thread": "nvmf_tgt_poll_group_000", 00:31:27.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:27.824 "listen_address": { 00:31:27.824 "trtype": "TCP", 00:31:27.824 "adrfam": 
"IPv4", 00:31:27.824 "traddr": "10.0.0.2", 00:31:27.824 "trsvcid": "4420" 00:31:27.824 }, 00:31:27.824 "peer_address": { 00:31:27.824 "trtype": "TCP", 00:31:27.824 "adrfam": "IPv4", 00:31:27.824 "traddr": "10.0.0.1", 00:31:27.824 "trsvcid": "37742" 00:31:27.824 }, 00:31:27.824 "auth": { 00:31:27.824 "state": "completed", 00:31:27.824 "digest": "sha512", 00:31:27.824 "dhgroup": "ffdhe8192" 00:31:27.824 } 00:31:27.824 } 00:31:27.824 ]' 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:27.824 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:27.825 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:28.084 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:28.084 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:28.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.651 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.910 10:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:31:28.910 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:28.910 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.911 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.477 00:31:29.477 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:29.477 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:29.477 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:29.736 { 00:31:29.736 "cntlid": 145, 00:31:29.736 "qid": 0, 00:31:29.736 "state": "enabled", 00:31:29.736 "thread": "nvmf_tgt_poll_group_000", 00:31:29.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:29.736 "listen_address": { 00:31:29.736 "trtype": "TCP", 00:31:29.736 "adrfam": "IPv4", 00:31:29.736 "traddr": "10.0.0.2", 00:31:29.736 "trsvcid": "4420" 00:31:29.736 }, 00:31:29.736 "peer_address": { 00:31:29.736 "trtype": "TCP", 00:31:29.736 "adrfam": "IPv4", 00:31:29.736 "traddr": "10.0.0.1", 00:31:29.736 "trsvcid": "37772" 00:31:29.736 }, 00:31:29.736 "auth": { 00:31:29.736 "state": 
"completed", 00:31:29.736 "digest": "sha512", 00:31:29.736 "dhgroup": "ffdhe8192" 00:31:29.736 } 00:31:29.736 } 00:31:29.736 ]' 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:29.736 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:29.994 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:29.994 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZmJhNTdmOTE2M2U4ZmUyMzIzZjVhOTdmYzhiNjExN2Y2MDU3M2FiZDFhODg2Y2M2s9zphA==: --dhchap-ctrl-secret 
DHHC-1:03:OTQyOGY1ZDJlZDVkYjkzMzdiOGI0NzZlODkwNDI4ZDlkY2Q2ZjY0ZWY5N2MxYmMyM2FhNzdmZGZhY2Y2ZTAyMXo2m+Q=: 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:30.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:31:30.562 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:31:31.130 request: 00:31:31.130 { 00:31:31.130 "name": "nvme0", 00:31:31.130 "trtype": "tcp", 00:31:31.130 "traddr": "10.0.0.2", 00:31:31.130 "adrfam": "ipv4", 00:31:31.130 "trsvcid": "4420", 00:31:31.130 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:31.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:31.130 "prchk_reftag": false, 00:31:31.130 "prchk_guard": false, 00:31:31.130 "hdgst": false, 00:31:31.130 "ddgst": false, 00:31:31.130 "dhchap_key": "key2", 00:31:31.130 "allow_unrecognized_csi": false, 00:31:31.130 "method": "bdev_nvme_attach_controller", 00:31:31.130 "req_id": 1 00:31:31.130 } 00:31:31.130 Got JSON-RPC error response 00:31:31.130 response: 00:31:31.130 { 00:31:31.130 "code": -5, 00:31:31.130 "message": 
"Input/output error" 00:31:31.130 } 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:31.130 10:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:31.130 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:31.389 request: 00:31:31.389 { 00:31:31.389 "name": "nvme0", 00:31:31.389 "trtype": "tcp", 00:31:31.389 "traddr": "10.0.0.2", 00:31:31.389 "adrfam": "ipv4", 00:31:31.389 "trsvcid": "4420", 00:31:31.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:31.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:31.389 "prchk_reftag": false, 00:31:31.389 "prchk_guard": false, 00:31:31.389 "hdgst": 
false, 00:31:31.389 "ddgst": false, 00:31:31.389 "dhchap_key": "key1", 00:31:31.389 "dhchap_ctrlr_key": "ckey2", 00:31:31.389 "allow_unrecognized_csi": false, 00:31:31.389 "method": "bdev_nvme_attach_controller", 00:31:31.389 "req_id": 1 00:31:31.389 } 00:31:31.389 Got JSON-RPC error response 00:31:31.389 response: 00:31:31.389 { 00:31:31.389 "code": -5, 00:31:31.389 "message": "Input/output error" 00:31:31.389 } 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.389 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.957 request: 00:31:31.957 { 00:31:31.957 "name": "nvme0", 00:31:31.957 "trtype": 
"tcp", 00:31:31.957 "traddr": "10.0.0.2", 00:31:31.957 "adrfam": "ipv4", 00:31:31.957 "trsvcid": "4420", 00:31:31.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:31.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:31.957 "prchk_reftag": false, 00:31:31.957 "prchk_guard": false, 00:31:31.957 "hdgst": false, 00:31:31.957 "ddgst": false, 00:31:31.957 "dhchap_key": "key1", 00:31:31.957 "dhchap_ctrlr_key": "ckey1", 00:31:31.957 "allow_unrecognized_csi": false, 00:31:31.957 "method": "bdev_nvme_attach_controller", 00:31:31.957 "req_id": 1 00:31:31.957 } 00:31:31.957 Got JSON-RPC error response 00:31:31.957 response: 00:31:31.957 { 00:31:31.957 "code": -5, 00:31:31.957 "message": "Input/output error" 00:31:31.957 } 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 642367 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 642367 ']' 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 642367 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:31.957 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642367 00:31:31.958 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:31.958 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:31.958 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642367' 00:31:31.958 killing process with pid 642367 00:31:31.958 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 642367 00:31:31.958 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 642367 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=664396 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 664396 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 664396 ']' 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.216 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 664396 00:31:32.475 
10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 664396 ']' 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.475 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 null0 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iAX 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 
10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3tA ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3tA 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ICR 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.l4N ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l4N 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lm9 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.BLh ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLh 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GMc 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.734 10:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:32.734 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.735 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.993 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.993 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:32.993 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:32.994 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:33.560 nvme0n1 00:31:33.560 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:33.560 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:33.560 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:33.819 { 00:31:33.819 "cntlid": 1, 00:31:33.819 "qid": 0, 00:31:33.819 "state": "enabled", 00:31:33.819 "thread": "nvmf_tgt_poll_group_000", 00:31:33.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:33.819 "listen_address": { 00:31:33.819 "trtype": "TCP", 00:31:33.819 "adrfam": "IPv4", 00:31:33.819 "traddr": "10.0.0.2", 00:31:33.819 "trsvcid": "4420" 00:31:33.819 }, 00:31:33.819 "peer_address": { 00:31:33.819 "trtype": "TCP", 00:31:33.819 "adrfam": "IPv4", 00:31:33.819 "traddr": "10.0.0.1", 00:31:33.819 "trsvcid": "37800" 00:31:33.819 }, 00:31:33.819 "auth": { 
00:31:33.819 "state": "completed", 00:31:33.819 "digest": "sha512", 00:31:33.819 "dhgroup": "ffdhe8192" 00:31:33.819 } 00:31:33.819 } 00:31:33.819 ]' 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:33.819 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:34.077 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:34.077 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:34.077 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:34.077 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:34.077 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:34.642 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:31:34.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:31:34.901 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:34.901 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:35.159 request: 00:31:35.159 { 00:31:35.159 "name": "nvme0", 00:31:35.160 "trtype": "tcp", 00:31:35.160 "traddr": "10.0.0.2", 00:31:35.160 "adrfam": "ipv4", 00:31:35.160 "trsvcid": "4420", 00:31:35.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:35.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:35.160 "prchk_reftag": false, 00:31:35.160 "prchk_guard": false, 00:31:35.160 "hdgst": false, 00:31:35.160 "ddgst": false, 00:31:35.160 "dhchap_key": "key3", 00:31:35.160 "allow_unrecognized_csi": false, 00:31:35.160 "method": "bdev_nvme_attach_controller", 00:31:35.160 "req_id": 1 00:31:35.160 } 
00:31:35.160 Got JSON-RPC error response 00:31:35.160 response: 00:31:35.160 { 00:31:35.160 "code": -5, 00:31:35.160 "message": "Input/output error" 00:31:35.160 } 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:31:35.160 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.418 10:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:35.418 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:35.677 request: 00:31:35.677 { 00:31:35.677 "name": "nvme0", 00:31:35.677 "trtype": "tcp", 00:31:35.677 "traddr": "10.0.0.2", 00:31:35.677 "adrfam": "ipv4", 00:31:35.677 "trsvcid": "4420", 00:31:35.677 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:35.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:35.677 "prchk_reftag": false, 00:31:35.677 "prchk_guard": false, 00:31:35.677 "hdgst": false, 00:31:35.677 "ddgst": false, 00:31:35.677 "dhchap_key": "key3", 00:31:35.677 "allow_unrecognized_csi": false, 00:31:35.677 "method": "bdev_nvme_attach_controller", 00:31:35.677 "req_id": 1 00:31:35.677 } 00:31:35.677 Got JSON-RPC error response 00:31:35.677 response: 00:31:35.677 { 00:31:35.677 "code": -5, 00:31:35.677 "message": "Input/output error" 00:31:35.677 } 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:35.677 10:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:35.677 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:35.677 10:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:36.246 request: 00:31:36.246 { 00:31:36.246 "name": "nvme0", 00:31:36.246 "trtype": "tcp", 00:31:36.246 "traddr": "10.0.0.2", 00:31:36.246 "adrfam": "ipv4", 00:31:36.246 "trsvcid": "4420", 00:31:36.246 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:36.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:36.246 "prchk_reftag": false, 00:31:36.246 "prchk_guard": false, 00:31:36.246 "hdgst": false, 00:31:36.246 "ddgst": false, 00:31:36.246 "dhchap_key": "key0", 00:31:36.246 "dhchap_ctrlr_key": "key1", 00:31:36.246 "allow_unrecognized_csi": false, 00:31:36.246 "method": "bdev_nvme_attach_controller", 00:31:36.246 "req_id": 1 00:31:36.246 } 00:31:36.246 Got JSON-RPC error response 00:31:36.246 response: 00:31:36.246 { 00:31:36.246 "code": -5, 00:31:36.246 "message": "Input/output error" 00:31:36.246 } 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:31:36.246 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:31:36.505 nvme0n1 00:31:36.505 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:31:36.505 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:31:36.505 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:31:36.764 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:31:37.701 nvme0n1 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.701 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.960 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:37.960 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:31:37.960 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:31:37.960 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:37.960 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.960 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:37.960 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: --dhchap-ctrl-secret DHHC-1:03:M2QyNGQ2NjQ2ZDk1MGVhYWNhMzBlNzc2ZWNkMDdiNDkwNzFlYzEyOWI1Y2VhMDBkNTFkNGJlZjUxYWM5Yjk1OIliZrU=: 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:31:38.528 10:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:38.528 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:31:38.788 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:31:39.356 request: 00:31:39.356 { 00:31:39.356 "name": "nvme0", 00:31:39.356 "trtype": "tcp", 00:31:39.356 "traddr": "10.0.0.2", 00:31:39.356 "adrfam": "ipv4", 00:31:39.356 "trsvcid": "4420", 00:31:39.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:39.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:31:39.356 "prchk_reftag": false, 00:31:39.356 "prchk_guard": false, 00:31:39.356 "hdgst": false, 00:31:39.356 "ddgst": false, 00:31:39.356 "dhchap_key": "key1", 00:31:39.356 "allow_unrecognized_csi": false, 00:31:39.356 "method": "bdev_nvme_attach_controller", 00:31:39.356 "req_id": 1 00:31:39.356 } 00:31:39.356 Got JSON-RPC error response 00:31:39.356 response: 00:31:39.356 { 00:31:39.356 "code": -5, 00:31:39.356 "message": "Input/output error" 00:31:39.356 } 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:39.356 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:39.926 nvme0n1 00:31:39.926 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:31:39.926 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:31:39.926 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:40.186 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.186 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:40.186 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:31:40.445 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:31:40.705 nvme0n1 00:31:40.705 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:31:40.705 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:31:40.705 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:40.964 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.964 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:40.964 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:40.964 
10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: '' 2s 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: ]] 00:31:40.964 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTY3M2E4NTM0NDZhOThmNmE0OTg0MjU1NGFkYzI4YjW0+mfv: 00:31:41.223 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:31:41.223 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:31:41.223 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:31:43.125 10:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: 2s 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: ]] 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODdmYTQ3NGNkODA0YTc2Y2RkN2Q0MjAxYzUwODdkNmQ3ODNiNjAyOGVlMDYyMTZm5qP9xQ==: 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:31:43.125 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:31:45.093 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:45.351 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:45.351 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:45.917 nvme0n1 00:31:45.917 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:45.917 10:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.917 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:45.917 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.917 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:45.917 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:46.483 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:31:46.483 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:31:46.483 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:31:46.743 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:31:47.002 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:31:47.002 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:31:47.002 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:31:47.002 10:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:47.002 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:47.569 request: 00:31:47.569 { 00:31:47.569 "name": "nvme0", 00:31:47.569 "dhchap_key": "key1", 00:31:47.569 "dhchap_ctrlr_key": "key3", 00:31:47.569 "method": "bdev_nvme_set_keys", 00:31:47.569 "req_id": 1 00:31:47.569 } 00:31:47.569 Got JSON-RPC error response 00:31:47.569 response: 00:31:47.569 { 00:31:47.569 "code": -13, 00:31:47.569 "message": "Permission denied" 00:31:47.569 } 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:31:47.569 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:47.569 10:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:31:47.829 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:31:47.829 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:31:48.766 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:31:48.766 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:31:48.766 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:49.025 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:31:49.025 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:49.025 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.025 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.025 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.025 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:49.025 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 
--reconnect-delay-sec 1 00:31:49.025 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:49.589 nvme0n1 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:31:49.848 10:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:49.848 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:50.107 request: 00:31:50.107 { 00:31:50.107 "name": "nvme0", 00:31:50.107 "dhchap_key": "key2", 00:31:50.107 "dhchap_ctrlr_key": "key0", 00:31:50.107 "method": "bdev_nvme_set_keys", 00:31:50.107 "req_id": 1 00:31:50.107 } 00:31:50.107 Got JSON-RPC error response 00:31:50.107 response: 00:31:50.107 { 00:31:50.107 "code": -13, 00:31:50.107 "message": "Permission denied" 00:31:50.107 } 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:31:50.107 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:50.366 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:31:50.366 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@273 -- # sleep 1s 00:31:51.300 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:31:51.300 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:31:51.300 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 642392 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 642392 ']' 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 642392 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642392 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642392' 00:31:51.560 killing process with pid 642392 00:31:51.560 10:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 642392 00:31:51.560 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 642392 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.127 rmmod nvme_tcp 00:31:52.127 rmmod nvme_fabrics 00:31:52.127 rmmod nvme_keyring 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 664396 ']' 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 664396 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 664396 ']' 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 664396 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:31:52.127 10:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664396 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664396' 00:31:52.127 killing process with pid 664396 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 664396 00:31:52.127 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 664396 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.386 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.289 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.289 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iAX /tmp/spdk.key-sha256.ICR /tmp/spdk.key-sha384.lm9 /tmp/spdk.key-sha512.GMc /tmp/spdk.key-sha512.3tA /tmp/spdk.key-sha384.l4N /tmp/spdk.key-sha256.BLh '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:31:54.289 00:31:54.289 real 2m32.571s 00:31:54.289 user 5m51.414s 00:31:54.289 sys 0m24.285s 00:31:54.289 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.289 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.289 ************************************ 00:31:54.289 END TEST nvmf_auth_target 00:31:54.289 ************************************ 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # [[ tcp == \t\c\p ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:54.548 
************************************ 00:31:54.548 START TEST nvmf_bdevio_no_huge 00:31:54.548 ************************************ 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:54.548 * Looking for test storage... 00:31:54.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 
00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.548 --rc genhtml_branch_coverage=1 00:31:54.548 --rc genhtml_function_coverage=1 00:31:54.548 --rc genhtml_legend=1 00:31:54.548 --rc geninfo_all_blocks=1 00:31:54.548 --rc geninfo_unexecuted_blocks=1 00:31:54.548 00:31:54.548 ' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.548 --rc genhtml_branch_coverage=1 00:31:54.548 --rc genhtml_function_coverage=1 00:31:54.548 --rc genhtml_legend=1 00:31:54.548 --rc geninfo_all_blocks=1 00:31:54.548 --rc geninfo_unexecuted_blocks=1 00:31:54.548 00:31:54.548 ' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.548 --rc genhtml_branch_coverage=1 00:31:54.548 --rc genhtml_function_coverage=1 00:31:54.548 --rc genhtml_legend=1 00:31:54.548 --rc geninfo_all_blocks=1 00:31:54.548 --rc geninfo_unexecuted_blocks=1 00:31:54.548 00:31:54.548 ' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.548 --rc genhtml_branch_coverage=1 00:31:54.548 --rc genhtml_function_coverage=1 00:31:54.548 --rc genhtml_legend=1 00:31:54.548 --rc geninfo_all_blocks=1 
00:31:54.548 --rc geninfo_unexecuted_blocks=1 00:31:54.548 00:31:54.548 ' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.548 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.549 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:32:01.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:01.128 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:01.129 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:01.129 Found net devices under 0000:86:00.0: cvl_0_0 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.129 
10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:01.129 Found net devices under 0000:86:00.1: cvl_0_1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:32:01.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:32:01.129 00:32:01.129 --- 10.0.0.2 ping statistics --- 00:32:01.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.129 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:01.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:32:01.129 00:32:01.129 --- 10.0.0.1 ping statistics --- 00:32:01.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.129 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=671305 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 671305 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 671305 ']' 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.129 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.129 [2024-12-09 10:43:01.661572] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:32:01.129 [2024-12-09 10:43:01.661621] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:32:01.129 [2024-12-09 10:43:01.855895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.129 [2024-12-09 10:43:01.901678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.129 [2024-12-09 10:43:01.901715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.129 [2024-12-09 10:43:01.901722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.129 [2024-12-09 10:43:01.901728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.129 [2024-12-09 10:43:01.901733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:01.129 [2024-12-09 10:43:01.903020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:01.130 [2024-12-09 10:43:01.903099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:01.130 [2024-12-09 10:43:01.903210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.130 [2024-12-09 10:43:01.903211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.387 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.388 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:01.388 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.388 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.388 [2024-12-09 10:43:02.554124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:01.646 10:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.646 Malloc0 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:01.646 [2024-12-09 10:43:02.590396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.646 10:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:01.646 { 00:32:01.646 "params": { 00:32:01.646 "name": "Nvme$subsystem", 00:32:01.646 "trtype": "$TEST_TRANSPORT", 00:32:01.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.646 "adrfam": "ipv4", 00:32:01.646 "trsvcid": "$NVMF_PORT", 00:32:01.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.646 "hdgst": ${hdgst:-false}, 00:32:01.646 "ddgst": ${ddgst:-false} 00:32:01.646 }, 00:32:01.646 "method": "bdev_nvme_attach_controller" 00:32:01.646 } 00:32:01.646 EOF 00:32:01.646 )") 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:32:01.646 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:01.646 "params": { 00:32:01.646 "name": "Nvme1", 00:32:01.646 "trtype": "tcp", 00:32:01.646 "traddr": "10.0.0.2", 00:32:01.646 "adrfam": "ipv4", 00:32:01.646 "trsvcid": "4420", 00:32:01.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:01.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:01.646 "hdgst": false, 00:32:01.646 "ddgst": false 00:32:01.646 }, 00:32:01.646 "method": "bdev_nvme_attach_controller" 00:32:01.646 }' 00:32:01.646 [2024-12-09 10:43:02.642533] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:32:01.646 [2024-12-09 10:43:02.642583] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid671526 ] 00:32:01.905 [2024-12-09 10:43:02.837607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.905 [2024-12-09 10:43:02.886237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.905 [2024-12-09 10:43:02.886331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.905 [2024-12-09 10:43:02.886332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:02.164 I/O targets: 00:32:02.164 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:02.164 00:32:02.164 00:32:02.164 CUnit - A unit testing framework for C - Version 2.1-3 00:32:02.164 http://cunit.sourceforge.net/ 00:32:02.164 00:32:02.164 00:32:02.164 Suite: bdevio tests on: Nvme1n1 00:32:02.164 Test: blockdev write read block ...passed 00:32:02.164 Test: blockdev write zeroes read block ...passed 00:32:02.164 Test: blockdev write zeroes read no split ...passed 00:32:02.164 Test: blockdev 
write zeroes read split ...passed 00:32:02.164 Test: blockdev write zeroes read split partial ...passed 00:32:02.164 Test: blockdev reset ...[2024-12-09 10:43:03.328332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:02.164 [2024-12-09 10:43:03.328396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2a340 (9): Bad file descriptor 00:32:02.424 [2024-12-09 10:43:03.346935] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:02.424 passed 00:32:02.424 Test: blockdev write read 8 blocks ...passed 00:32:02.424 Test: blockdev write read size > 128k ...passed 00:32:02.424 Test: blockdev write read invalid size ...passed 00:32:02.424 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:02.424 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:02.424 Test: blockdev write read max offset ...passed 00:32:02.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:02.424 Test: blockdev writev readv 8 blocks ...passed 00:32:02.693 Test: blockdev writev readv 30 x 1block ...passed 00:32:02.693 Test: blockdev writev readv block ...passed 00:32:02.693 Test: blockdev writev readv size > 128k ...passed 00:32:02.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:02.693 Test: blockdev comparev and writev ...[2024-12-09 10:43:03.646016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.646063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 
[2024-12-09 10:43:03.646075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.646374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.646394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.646412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.646424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.646702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.646730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.646743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.647021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.693 [2024-12-09 10:43:03.647034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:02.693 [2024-12-09 10:43:03.647050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.694 [2024-12-09 10:43:03.647061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:02.694 passed 00:32:02.694 Test: blockdev nvme passthru rw ...passed 00:32:02.694 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:43:03.729295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.694 [2024-12-09 10:43:03.729314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:02.694 [2024-12-09 10:43:03.729442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.694 [2024-12-09 10:43:03.729455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:02.694 [2024-12-09 10:43:03.729590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.694 [2024-12-09 10:43:03.729601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:02.694 [2024-12-09 10:43:03.729730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.694 [2024-12-09 10:43:03.729749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:02.694 passed 00:32:02.694 Test: blockdev nvme admin passthru ...passed 00:32:02.694 Test: blockdev copy ...passed 00:32:02.694 00:32:02.694 Run Summary: Type Total Ran Passed Failed Inactive 00:32:02.694 suites 1 1 n/a 0 0 00:32:02.694 tests 23 23 23 0 0 00:32:02.694 asserts 152 152 152 0 n/a 00:32:02.694 00:32:02.694 Elapsed time = 1.245 
seconds 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.263 rmmod nvme_tcp 00:32:03.263 rmmod nvme_fabrics 00:32:03.263 rmmod nvme_keyring 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 671305 ']' 00:32:03.263 10:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 671305 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 671305 ']' 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 671305 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671305 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671305' 00:32:03.263 killing process with pid 671305 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 671305 00:32:03.263 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 671305 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:32:03.522 10:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.522 10:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.060 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.060 00:32:06.060 real 0m11.166s 00:32:06.060 user 0m14.931s 00:32:06.060 sys 0m5.389s 00:32:06.060 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.060 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:06.060 ************************************ 00:32:06.060 END TEST nvmf_bdevio_no_huge 00:32:06.060 ************************************ 00:32:06.060 10:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # '[' tcp = tcp ']' 00:32:06.060 10:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.061 10:43:06 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:06.061 ************************************ 00:32:06.061 START TEST nvmf_tls 00:32:06.061 ************************************ 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:32:06.061 * Looking for test storage... 00:32:06.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 
00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:06.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.061 --rc genhtml_branch_coverage=1 00:32:06.061 --rc genhtml_function_coverage=1 00:32:06.061 --rc genhtml_legend=1 00:32:06.061 --rc geninfo_all_blocks=1 00:32:06.061 --rc geninfo_unexecuted_blocks=1 00:32:06.061 00:32:06.061 ' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:06.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.061 --rc genhtml_branch_coverage=1 00:32:06.061 --rc genhtml_function_coverage=1 00:32:06.061 --rc genhtml_legend=1 00:32:06.061 --rc geninfo_all_blocks=1 00:32:06.061 --rc geninfo_unexecuted_blocks=1 00:32:06.061 00:32:06.061 ' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:06.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.061 --rc genhtml_branch_coverage=1 00:32:06.061 --rc genhtml_function_coverage=1 00:32:06.061 --rc genhtml_legend=1 00:32:06.061 --rc geninfo_all_blocks=1 00:32:06.061 --rc geninfo_unexecuted_blocks=1 00:32:06.061 00:32:06.061 ' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:06.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.061 --rc genhtml_branch_coverage=1 00:32:06.061 --rc genhtml_function_coverage=1 00:32:06.061 --rc genhtml_legend=1 00:32:06.061 --rc geninfo_all_blocks=1 00:32:06.061 --rc geninfo_unexecuted_blocks=1 00:32:06.061 00:32:06.061 ' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:32:06.061 10:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.061 
10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.061 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:32:06.062 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.358 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.359 10:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:11.359 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:11.359 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.359 10:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:11.359 Found net devices under 0000:86:00.0: cvl_0_0 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:11.359 Found net devices under 0000:86:00.1: cvl_0_1 00:32:11.359 10:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.359 
10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.359 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:11.359 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:11.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:32:11.359 00:32:11.359 --- 10.0.0.2 ping statistics --- 00:32:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.359 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:11.360 00:32:11.360 --- 10.0.0.1 ping statistics --- 00:32:11.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.360 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=675279 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 675279 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 675279 ']' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.360 [2024-12-09 10:43:12.235380] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:32:11.360 [2024-12-09 10:43:12.235433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.360 [2024-12-09 10:43:12.306411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.360 [2024-12-09 10:43:12.346291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.360 [2024-12-09 10:43:12.346325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:11.360 [2024-12-09 10:43:12.346332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.360 [2024-12-09 10:43:12.346339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.360 [2024-12-09 10:43:12.346344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.360 [2024-12-09 10:43:12.346889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:32:11.360 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:32:11.618 true 00:32:11.618 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:11.618 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:32:11.876 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:32:11.876 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:32:11.876 
10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:32:11.876 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:11.876 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:32:12.134 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:32:12.134 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:32:12.134 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:32:12.394 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:12.394 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:32:12.394 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:32:12.394 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:32:12.654 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:12.654 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:32:12.654 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:32:12.654 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:32:12.654 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:32:12.913 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:12.913 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:32:13.172 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:32:13.172 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:32:13.172 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:32:13.172 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:32:13.172 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:13.502 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:32:13.502 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:32:13.502 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:32:13.503 10:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.CK9Vs88VgQ 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.sEk3s0CcU4 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.CK9Vs88VgQ 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.sEk3s0CcU4 00:32:13.503 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:32:13.810 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:32:14.068 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.CK9Vs88VgQ 00:32:14.068 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CK9Vs88VgQ 00:32:14.068 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:14.068 [2024-12-09 10:43:15.237150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.326 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:14.326 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:14.584 [2024-12-09 10:43:15.598055] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:14.584 [2024-12-09 10:43:15.598300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.584 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:14.843 malloc0 00:32:14.843 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:14.843 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CK9Vs88VgQ 00:32:15.101 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:15.360 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.CK9Vs88VgQ 00:33:46.205 Resuming build at Mon Dec 09 09:44:47 UTC 2024 after Jenkins restart 00:33:52.351 Waiting for reconnection of WFP8 before proceeding with build 00:33:52.648 Timeout set to expire in 41 min 00:33:52.661 Ready to run at Mon Dec 09 09:44:53 UTC 2024 00:33:52.695 Running on WFP26 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:52.736 [Pipeline] { 00:33:52.817 [Pipeline] catchError 00:33:52.833 [Pipeline] { 00:33:52.873 [Pipeline] wrap 00:33:52.902 [Pipeline] { 00:33:52.930 [Pipeline] stage 00:33:52.936 [Pipeline] { (Prologue) 00:33:53.061 [Pipeline] sh 00:33:53.997 + logger -p user.info -t JENKINS-CI 00:33:54.069 [Pipeline] echo 00:33:54.113 Node: WFP26 00:33:54.135 [Pipeline] sh 00:33:54.528 [Pipeline] setCustomBuildProperty 00:33:54.541 [Pipeline] echo 00:33:54.543 Cleanup processes 00:33:54.550 [Pipeline] sh 00:33:54.852 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:54.852 2152905 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:54.869 [Pipeline] sh 00:33:55.162 ++ sudo 
pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:55.162 ++ grep -v 'sudo pgrep' 00:33:55.162 ++ awk '{print $1}' 00:33:55.162 + sudo kill -9 00:33:55.162 + true 00:33:55.178 [Pipeline] cleanWs 00:33:55.190 [WS-CLEANUP] Deleting project workspace... 00:33:55.190 [WS-CLEANUP] Deferred wipeout is used... 00:33:55.202 [WS-CLEANUP] done 00:33:55.208 [Pipeline] setCustomBuildProperty 00:33:55.224 [Pipeline] sh 00:33:55.517 + sudo git config --global --replace-all safe.directory '*' 00:33:55.543 [Pipeline] httpRequest 00:33:57.133 [Pipeline] echo 00:33:57.134 Sorcerer 10.211.164.101 is alive 00:33:57.145 [Pipeline] retry 00:33:57.147 [Pipeline] { 00:33:57.161 [Pipeline] httpRequest 00:33:57.165 HttpMethod: GET 00:33:57.165 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:57.166 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:57.170 Response Code: HTTP/1.1 200 OK 00:33:57.170 Success: Status code 200 is in the accepted range: 200,404 00:33:57.170 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:57.535 [Pipeline] } 00:33:57.554 [Pipeline] // retry 00:33:57.561 [Pipeline] sh 00:33:57.855 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:57.871 [Pipeline] httpRequest 00:33:59.729 [Pipeline] echo 00:33:59.730 Sorcerer 10.211.164.101 is alive 00:33:59.740 [Pipeline] retry 00:33:59.742 [Pipeline] { 00:33:59.757 [Pipeline] httpRequest 00:33:59.761 HttpMethod: GET 00:33:59.762 URL: http://10.211.164.101/packages/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:33:59.763 Sending request to url: http://10.211.164.101/packages/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:33:59.772 Response Code: HTTP/1.1 200 OK 00:33:59.772 Success: Status code 200 is in the accepted range: 200,404 00:33:59.772 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:34:14.007 [Pipeline] } 00:34:14.030 [Pipeline] // retry 00:34:14.039 [Pipeline] sh 00:34:14.339 + tar --no-same-owner -xf spdk_b920049a10f61ff10a17de17284f589f8629ea45.tar.gz 00:34:18.573 [Pipeline] sh 00:34:18.867 + git -C spdk log --oneline -n5 00:34:18.867 b920049a1 env: use 4-KiB memory mapping in no-huge mode 00:34:18.867 070cd5283 env: extend the page table to support 4-KiB mapping 00:34:18.867 6c714c5fe env: add mem_map_fini and vtophys_fini for cleanup 00:34:18.867 b7d7c4b24 env: handle possible DPDK errors in mem_map_init 00:34:18.867 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:34:18.881 [Pipeline] } 00:34:18.900 [Pipeline] // stage 00:34:18.910 [Pipeline] stage 00:34:18.912 [Pipeline] { (Prepare) 00:34:18.957 [Pipeline] writeFile 00:34:18.978 [Pipeline] sh 00:34:19.271 + logger -p user.info -t JENKINS-CI 00:34:19.288 [Pipeline] sh 00:34:19.582 + logger -p user.info -t JENKINS-CI 00:34:19.596 [Pipeline] sh 00:34:19.886 + cat autorun-spdk.conf 00:34:19.886 SPDK_RUN_FUNCTIONAL_TEST=1 00:34:19.886 SPDK_TEST_NVMF=1 00:34:19.886 SPDK_TEST_NVME_CLI=1 00:34:19.886 SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:19.886 SPDK_TEST_NVMF_NICS=e810 00:34:19.886 SPDK_TEST_VFIOUSER=1 00:34:19.886 SPDK_RUN_UBSAN=1 00:34:19.886 NET_TYPE=phy 00:34:19.895 RUN_NIGHTLY=0 00:34:19.900 [Pipeline] readFile 00:34:19.946 [Pipeline] withEnv 00:34:19.948 [Pipeline] { 00:34:19.964 [Pipeline] sh 00:34:20.257 + set -ex 00:34:20.257 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:34:20.257 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:34:20.257 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:34:20.257 ++ SPDK_TEST_NVMF=1 00:34:20.257 ++ SPDK_TEST_NVME_CLI=1 00:34:20.257 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:20.257 ++ SPDK_TEST_NVMF_NICS=e810 00:34:20.257 ++ SPDK_TEST_VFIOUSER=1 00:34:20.257 ++ SPDK_RUN_UBSAN=1 00:34:20.257 ++ 
NET_TYPE=phy 00:34:20.257 ++ RUN_NIGHTLY=0 00:34:20.257 + case $SPDK_TEST_NVMF_NICS in 00:34:20.257 + DRIVERS=ice 00:34:20.257 + [[ tcp == \r\d\m\a ]] 00:34:20.257 + [[ -n ice ]] 00:34:20.257 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:34:20.257 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:34:20.257 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:34:20.257 rmmod: ERROR: Module i40iw is not currently loaded 00:34:20.257 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:34:20.257 + true 00:34:20.257 + for D in $DRIVERS 00:34:20.257 + sudo modprobe ice 00:34:20.257 + exit 0 00:34:20.269 [Pipeline] } 00:34:20.288 [Pipeline] // withEnv 00:34:20.294 [Pipeline] } 00:34:20.312 [Pipeline] // stage 00:34:20.331 [Pipeline] catchError 00:34:20.334 [Pipeline] { 00:34:20.353 [Pipeline] timeout 00:34:20.354 Timeout set to expire in 1 hr 0 min 00:34:20.356 [Pipeline] { 00:34:20.375 [Pipeline] stage 00:34:20.378 [Pipeline] { (Tests) 00:34:20.396 [Pipeline] sh 00:34:20.689 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:20.689 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:20.689 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:20.689 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:34:20.689 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:20.689 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:34:20.689 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:34:20.689 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:34:20.689 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:34:20.689 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:34:20.689 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:34:20.689 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:20.689 + source /etc/os-release 00:34:20.689 ++ NAME='Fedora Linux' 00:34:20.689 ++ VERSION='39 (Cloud Edition)' 00:34:20.689 ++ ID=fedora 00:34:20.689 ++ VERSION_ID=39 00:34:20.689 ++ VERSION_CODENAME= 00:34:20.689 ++ PLATFORM_ID=platform:f39 00:34:20.689 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:34:20.689 ++ ANSI_COLOR='0;38;2;60;110;180' 00:34:20.690 ++ LOGO=fedora-logo-icon 00:34:20.690 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:34:20.690 ++ HOME_URL=https://fedoraproject.org/ 00:34:20.690 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:34:20.690 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:34:20.690 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:34:20.690 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:34:20.690 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:34:20.690 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:34:20.690 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:34:20.690 ++ SUPPORT_END=2024-11-12 00:34:20.690 ++ VARIANT='Cloud Edition' 00:34:20.690 ++ VARIANT_ID=cloud 00:34:20.690 + uname -a 00:34:20.690 Linux spdk-wfp-26 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:34:20.690 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:34:23.234 Hugepages 00:34:23.234 node hugesize free / total 00:34:23.234 node0 1048576kB 0 / 0 00:34:23.234 node0 2048kB 0 / 0 00:34:23.234 node1 1048576kB 0 / 0 00:34:23.234 node1 2048kB 0 / 0 00:34:23.234 00:34:23.234 Type BDF Vendor Device NUMA Driver Device Block devices 00:34:23.234 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:34:23.234 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:34:23.234 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:34:23.496 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:34:23.496 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:34:23.496 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:34:23.496 + rm -f /tmp/spdk-ld-path 00:34:23.496 + source autorun-spdk.conf 00:34:23.496 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:34:23.496 ++ SPDK_TEST_NVMF=1 00:34:23.496 ++ SPDK_TEST_NVME_CLI=1 00:34:23.496 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:23.496 ++ SPDK_TEST_NVMF_NICS=e810 00:34:23.496 ++ SPDK_TEST_VFIOUSER=1 00:34:23.496 ++ SPDK_RUN_UBSAN=1 00:34:23.496 ++ NET_TYPE=phy 00:34:23.496 ++ RUN_NIGHTLY=0 00:34:23.496 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:34:23.496 + [[ -n '' ]] 00:34:23.496 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:23.496 + for M in /var/spdk/build-*-manifest.txt 00:34:23.496 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:34:23.496 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:34:23.496 + for M in /var/spdk/build-*-manifest.txt 00:34:23.496 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:34:23.496 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:34:23.496 + for M in /var/spdk/build-*-manifest.txt 00:34:23.496 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:34:23.496 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:34:23.496 ++ uname 00:34:23.496 + [[ Linux == \L\i\n\u\x ]] 00:34:23.496 + sudo dmesg -T 00:34:23.496 + sudo dmesg --clear 00:34:23.496 + dmesg_pid=2154355 00:34:23.496 + [[ Fedora Linux == FreeBSD ]] 00:34:23.496 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:23.496 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:23.496 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:34:23.496 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:34:23.496 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:34:23.496 + [[ -x /usr/src/fio-static/fio ]] 00:34:23.496 + sudo dmesg -Tw 00:34:23.496 + export FIO_BIN=/usr/src/fio-static/fio 00:34:23.496 + FIO_BIN=/usr/src/fio-static/fio 00:34:23.496 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:34:23.496 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:34:23.496 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:34:23.496 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:23.496 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:23.496 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:34:23.496 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:23.496 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:23.496 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:34:23.759 10:45:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:34:23.759 10:45:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:34:23.759 10:45:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:34:23.759 10:45:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:34:23.759 10:45:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:34:23.759 10:45:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:34:23.759 10:45:24 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.759 10:45:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:34:23.759 10:45:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:23.759 10:45:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.759 10:45:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.759 10:45:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.759 10:45:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.759 10:45:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.759 10:45:24 -- paths/export.sh@5 -- $ export PATH 00:34:23.759 10:45:24 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.759 10:45:24 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:23.759 10:45:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:34:23.759 10:45:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733737524.XXXXXX 00:34:23.759 10:45:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733737524.IZqcrz 00:34:23.759 10:45:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:34:23.759 10:45:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:34:23.759 10:45:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:23.759 10:45:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:23.759 10:45:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:23.759 10:45:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:34:23.759 10:45:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:34:23.759 10:45:24 -- common/autotest_common.sh@10 -- $ set +x 00:34:23.759 10:45:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:23.759 10:45:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:34:23.759 10:45:24 -- pm/common@17 -- $ local monitor 00:34:23.759 10:45:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:23.759 10:45:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:23.759 10:45:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:23.759 10:45:24 -- pm/common@21 -- $ date +%s 00:34:23.759 10:45:24 -- pm/common@21 -- $ date +%s 00:34:23.759 10:45:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:23.759 10:45:24 -- pm/common@25 -- $ sleep 1 00:34:23.759 10:45:24 -- pm/common@21 -- $ date +%s 00:34:23.759 10:45:24 -- pm/common@21 -- $ date +%s 00:34:23.759 10:45:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737524 00:34:23.759 10:45:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737524 00:34:23.759 10:45:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737524 00:34:23.759 10:45:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737524 00:34:23.759 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737524_collect-cpu-load.pm.log 00:34:23.759 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737524_collect-vmstat.pm.log 00:34:23.759 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737524_collect-cpu-temp.pm.log 00:34:23.759 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737524_collect-bmc-pm.bmc.pm.log 00:34:24.711 10:45:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:34:24.711 10:45:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:34:24.711 10:45:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:34:24.711 10:45:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:24.711 10:45:25 -- spdk/autobuild.sh@16 -- $ date -u 00:34:24.711 Mon Dec 9 09:45:25 AM UTC 2024 00:34:24.711 10:45:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:34:24.973 v25.01-pre-317-gb920049a1 00:34:24.973 10:45:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:34:24.973 10:45:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:34:24.973 10:45:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:34:24.973 10:45:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:34:24.973 10:45:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:34:24.973 10:45:25 -- common/autotest_common.sh@10 -- $ set +x 00:34:24.973 ************************************ 00:34:24.973 START TEST ubsan 00:34:24.973 ************************************ 00:34:24.973 10:45:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:34:24.973 using ubsan 00:34:24.973 00:34:24.973 real 0m0.001s 00:34:24.973 user 0m0.001s 00:34:24.973 sys 0m0.000s 00:34:24.973 10:45:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:34:24.973 10:45:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:34:24.973 ************************************ 00:34:24.973 END TEST ubsan 00:34:24.973 
************************************ 00:34:24.973 10:45:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:34:24.973 10:45:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:34:24.973 10:45:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:34:24.973 10:45:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:34:25.236 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:34:25.236 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:34:25.496 Using 'verbs' RDMA provider 00:34:41.350 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:34:56.253 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:34:56.253 Creating mk/config.mk...done. 00:34:56.253 Creating mk/cc.flags.mk...done. 00:34:56.253 Type 'make' to build. 
00:34:56.253 10:45:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:34:56.253 10:45:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:34:56.253 10:45:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:34:56.253 10:45:55 -- common/autotest_common.sh@10 -- $ set +x 00:34:56.253 ************************************ 00:34:56.253 START TEST make 00:34:56.253 ************************************ 00:34:56.253 10:45:55 make -- common/autotest_common.sh@1129 -- $ make -j72 00:34:56.253 make[1]: Nothing to be done for 'all'. 00:34:57.206 The Meson build system 00:34:57.206 Version: 1.5.0 00:34:57.206 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:34:57.206 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:34:57.206 Build type: native build 00:34:57.206 Project name: libvfio-user 00:34:57.206 Project version: 0.0.1 00:34:57.206 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:34:57.206 C linker for the host machine: cc ld.bfd 2.40-14 00:34:57.206 Host machine cpu family: x86_64 00:34:57.206 Host machine cpu: x86_64 00:34:57.206 Run-time dependency threads found: YES 00:34:57.206 Library dl found: YES 00:34:57.206 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:34:57.206 Run-time dependency json-c found: YES 0.17 00:34:57.206 Run-time dependency cmocka found: YES 1.1.7 00:34:57.206 Program pytest-3 found: NO 00:34:57.206 Program flake8 found: NO 00:34:57.206 Program misspell-fixer found: NO 00:34:57.206 Program restructuredtext-lint found: NO 00:34:57.206 Program valgrind found: YES (/usr/bin/valgrind) 00:34:57.206 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:34:57.206 Compiler for C supports arguments -Wmissing-declarations: YES 00:34:57.206 Compiler for C supports arguments -Wwrite-strings: YES 00:34:57.206 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:34:57.206 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:34:57.206 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:34:57.206 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:34:57.206 Build targets in project: 8 00:34:57.206 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:34:57.206 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:34:57.206 00:34:57.206 libvfio-user 0.0.1 00:34:57.206 00:34:57.206 User defined options 00:34:57.206 buildtype : debug 00:34:57.206 default_library: shared 00:34:57.206 libdir : /usr/local/lib 00:34:57.206 00:34:57.206 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:34:58.151 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:34:58.151 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:34:58.151 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:34:58.151 [3/37] Compiling C object samples/null.p/null.c.o 00:34:58.151 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:34:58.151 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:34:58.151 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:34:58.151 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:34:58.151 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:34:58.151 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:34:58.151 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:34:58.151 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 
00:34:58.151 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:34:58.151 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:34:58.151 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:34:58.151 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:34:58.151 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:34:58.151 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:34:58.410 [18/37] Compiling C object samples/server.p/server.c.o 00:34:58.410 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:34:58.410 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:34:58.410 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:34:58.410 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:34:58.410 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:34:58.410 [24/37] Compiling C object samples/client.p/client.c.o 00:34:58.410 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:34:58.410 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:34:58.410 [27/37] Linking target samples/client 00:34:58.410 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:34:58.410 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:34:58.410 [30/37] Linking target test/unit_tests 00:34:58.410 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:34:58.669 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:34:58.669 [33/37] Linking target samples/server 00:34:58.669 [34/37] Linking target samples/null 00:34:58.669 [35/37] Linking target samples/shadow_ioeventfd_server 00:34:58.669 [36/37] Linking target samples/gpio-pci-idio-16 00:34:58.669 [37/37] Linking target samples/lspci 00:34:58.669 INFO: autodetecting backend as ninja 00:34:58.670 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:34:58.670 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:34:59.241 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:34:59.241 ninja: no work to do. 00:35:05.818 The Meson build system 00:35:05.818 Version: 1.5.0 00:35:05.818 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:35:05.818 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:35:05.818 Build type: native build 00:35:05.818 Program cat found: YES (/usr/bin/cat) 00:35:05.818 Project name: DPDK 00:35:05.818 Project version: 24.03.0 00:35:05.818 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:35:05.818 C linker for the host machine: cc ld.bfd 2.40-14 00:35:05.818 Host machine cpu family: x86_64 00:35:05.818 Host machine cpu: x86_64 00:35:05.818 Message: ## Building in Developer Mode ## 00:35:05.818 Program pkg-config found: YES (/usr/bin/pkg-config) 00:35:05.818 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:35:05.818 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:35:05.818 Program python3 found: YES (/usr/bin/python3) 00:35:05.818 Program cat found: YES (/usr/bin/cat) 00:35:05.818 Compiler for C supports arguments -march=native: YES 00:35:05.818 Checking for size of "void *" : 8 00:35:05.818 Checking for size of "void *" : 8 (cached) 00:35:05.818 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:35:05.818 Library m found: YES 00:35:05.818 Library numa found: YES 00:35:05.818 Has header "numaif.h" : YES 00:35:05.818 Library fdt found: NO 
00:35:05.818 Library execinfo found: NO 00:35:05.818 Has header "execinfo.h" : YES 00:35:05.818 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:35:05.818 Run-time dependency libarchive found: NO (tried pkgconfig) 00:35:05.818 Run-time dependency libbsd found: NO (tried pkgconfig) 00:35:05.818 Run-time dependency jansson found: NO (tried pkgconfig) 00:35:05.818 Run-time dependency openssl found: YES 3.1.1 00:35:05.818 Run-time dependency libpcap found: YES 1.10.4 00:35:05.818 Has header "pcap.h" with dependency libpcap: YES 00:35:05.818 Compiler for C supports arguments -Wcast-qual: YES 00:35:05.818 Compiler for C supports arguments -Wdeprecated: YES 00:35:05.818 Compiler for C supports arguments -Wformat: YES 00:35:05.818 Compiler for C supports arguments -Wformat-nonliteral: NO 00:35:05.818 Compiler for C supports arguments -Wformat-security: NO 00:35:05.818 Compiler for C supports arguments -Wmissing-declarations: YES 00:35:05.818 Compiler for C supports arguments -Wmissing-prototypes: YES 00:35:05.818 Compiler for C supports arguments -Wnested-externs: YES 00:35:05.818 Compiler for C supports arguments -Wold-style-definition: YES 00:35:05.818 Compiler for C supports arguments -Wpointer-arith: YES 00:35:05.818 Compiler for C supports arguments -Wsign-compare: YES 00:35:05.818 Compiler for C supports arguments -Wstrict-prototypes: YES 00:35:05.818 Compiler for C supports arguments -Wundef: YES 00:35:05.818 Compiler for C supports arguments -Wwrite-strings: YES 00:35:05.818 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:35:05.818 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:35:05.818 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:35:05.818 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:35:05.818 Program objdump found: YES (/usr/bin/objdump) 00:35:05.818 Compiler for C supports arguments -mavx512f: YES 00:35:05.818 Checking if "AVX512 checking" compiles: YES 00:35:05.818 
Fetching value of define "__SSE4_2__" : 1 00:35:05.818 Fetching value of define "__AES__" : 1 00:35:05.818 Fetching value of define "__AVX__" : 1 00:35:05.818 Fetching value of define "__AVX2__" : 1 00:35:05.818 Fetching value of define "__AVX512BW__" : 1 00:35:05.818 Fetching value of define "__AVX512CD__" : 1 00:35:05.818 Fetching value of define "__AVX512DQ__" : 1 00:35:05.818 Fetching value of define "__AVX512F__" : 1 00:35:05.818 Fetching value of define "__AVX512VL__" : 1 00:35:05.818 Fetching value of define "__PCLMUL__" : 1 00:35:05.818 Fetching value of define "__RDRND__" : 1 00:35:05.818 Fetching value of define "__RDSEED__" : 1 00:35:05.818 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:35:05.818 Fetching value of define "__znver1__" : (undefined) 00:35:05.818 Fetching value of define "__znver2__" : (undefined) 00:35:05.818 Fetching value of define "__znver3__" : (undefined) 00:35:05.818 Fetching value of define "__znver4__" : (undefined) 00:35:05.818 Compiler for C supports arguments -Wno-format-truncation: YES 00:35:05.818 Message: lib/log: Defining dependency "log" 00:35:05.818 Message: lib/kvargs: Defining dependency "kvargs" 00:35:05.818 Message: lib/telemetry: Defining dependency "telemetry" 00:35:05.818 Checking for function "getentropy" : NO 00:35:05.818 Message: lib/eal: Defining dependency "eal" 00:35:05.818 Message: lib/ring: Defining dependency "ring" 00:35:05.818 Message: lib/rcu: Defining dependency "rcu" 00:35:05.818 Message: lib/mempool: Defining dependency "mempool" 00:35:05.818 Message: lib/mbuf: Defining dependency "mbuf" 00:35:05.818 Fetching value of define "__PCLMUL__" : 1 (cached) 00:35:05.818 Fetching value of define "__AVX512F__" : 1 (cached) 00:35:05.818 Fetching value of define "__AVX512BW__" : 1 (cached) 00:35:05.818 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:35:05.818 Fetching value of define "__AVX512VL__" : 1 (cached) 00:35:05.818 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:35:05.818 Compiler for C supports arguments -mpclmul: YES 00:35:05.818 Compiler for C supports arguments -maes: YES 00:35:05.818 Compiler for C supports arguments -mavx512f: YES (cached) 00:35:05.818 Compiler for C supports arguments -mavx512bw: YES 00:35:05.818 Compiler for C supports arguments -mavx512dq: YES 00:35:05.818 Compiler for C supports arguments -mavx512vl: YES 00:35:05.818 Compiler for C supports arguments -mvpclmulqdq: YES 00:35:05.818 Compiler for C supports arguments -mavx2: YES 00:35:05.818 Compiler for C supports arguments -mavx: YES 00:35:05.818 Message: lib/net: Defining dependency "net" 00:35:05.818 Message: lib/meter: Defining dependency "meter" 00:35:05.818 Message: lib/ethdev: Defining dependency "ethdev" 00:35:05.818 Message: lib/pci: Defining dependency "pci" 00:35:05.818 Message: lib/cmdline: Defining dependency "cmdline" 00:35:05.818 Message: lib/hash: Defining dependency "hash" 00:35:05.818 Message: lib/timer: Defining dependency "timer" 00:35:05.818 Message: lib/compressdev: Defining dependency "compressdev" 00:35:05.818 Message: lib/cryptodev: Defining dependency "cryptodev" 00:35:05.818 Message: lib/dmadev: Defining dependency "dmadev" 00:35:05.818 Compiler for C supports arguments -Wno-cast-qual: YES 00:35:05.818 Message: lib/power: Defining dependency "power" 00:35:05.818 Message: lib/reorder: Defining dependency "reorder" 00:35:05.818 Message: lib/security: Defining dependency "security" 00:35:05.818 Has header "linux/userfaultfd.h" : YES 00:35:05.818 Has header "linux/vduse.h" : YES 00:35:05.818 Message: lib/vhost: Defining dependency "vhost" 00:35:05.818 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:35:05.818 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:35:05.818 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:35:05.818 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:35:05.818 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:35:05.818 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:35:05.818 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:35:05.818 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:35:05.818 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:35:05.818 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:35:05.818 Program doxygen found: YES (/usr/local/bin/doxygen)
00:35:05.818 Configuring doxy-api-html.conf using configuration
00:35:05.818 Configuring doxy-api-man.conf using configuration
00:35:05.818 Program mandb found: YES (/usr/bin/mandb)
00:35:05.818 Program sphinx-build found: NO
00:35:05.818 Configuring rte_build_config.h using configuration
00:35:05.818 Message:
00:35:05.818 =================
00:35:05.818 Applications Enabled
00:35:05.818 =================
00:35:05.818
00:35:05.818 apps:
00:35:05.818
00:35:05.818
00:35:05.818 Message:
00:35:05.819 =================
00:35:05.819 Libraries Enabled
00:35:05.819 =================
00:35:05.819
00:35:05.819 libs:
00:35:05.819 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:35:05.819 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:35:05.819 cryptodev, dmadev, power, reorder, security, vhost,
00:35:05.819
00:35:05.819 Message:
00:35:05.819 ===============
00:35:05.819 Drivers Enabled
00:35:05.819 ===============
00:35:05.819
00:35:05.819 common:
00:35:05.819
00:35:05.819 bus:
00:35:05.819 pci, vdev,
00:35:05.819 mempool:
00:35:05.819 ring,
00:35:05.819 dma:
00:35:05.819
00:35:05.819 net:
00:35:05.819
00:35:05.819 crypto:
00:35:05.819
00:35:05.819 compress:
00:35:05.819
00:35:05.819 vdpa:
00:35:05.819
00:35:05.819
00:35:05.819 Message:
00:35:05.819 =================
00:35:05.819 Content Skipped
00:35:05.819 =================
00:35:05.819
00:35:05.819 apps:
00:35:05.819 dumpcap: explicitly disabled via build config
00:35:05.819 graph: explicitly disabled via build config
00:35:05.819 pdump: explicitly disabled via build config
00:35:05.819 proc-info: explicitly disabled via build config
00:35:05.819 test-acl: explicitly disabled via build config
00:35:05.819 test-bbdev: explicitly disabled via build config
00:35:05.819 test-cmdline: explicitly disabled via build config
00:35:05.819 test-compress-perf: explicitly disabled via build config
00:35:05.819 test-crypto-perf: explicitly disabled via build config
00:35:05.819 test-dma-perf: explicitly disabled via build config
00:35:05.819 test-eventdev: explicitly disabled via build config
00:35:05.819 test-fib: explicitly disabled via build config
00:35:05.819 test-flow-perf: explicitly disabled via build config
00:35:05.819 test-gpudev: explicitly disabled via build config
00:35:05.819 test-mldev: explicitly disabled via build config
00:35:05.819 test-pipeline: explicitly disabled via build config
00:35:05.819 test-pmd: explicitly disabled via build config
00:35:05.819 test-regex: explicitly disabled via build config
00:35:05.819 test-sad: explicitly disabled via build config
00:35:05.819 test-security-perf: explicitly disabled via build config
00:35:05.819
00:35:05.819 libs:
00:35:05.819 argparse: explicitly disabled via build config
00:35:05.819 metrics: explicitly disabled via build config
00:35:05.819 acl: explicitly disabled via build config
00:35:05.819 bbdev: explicitly disabled via build config
00:35:05.819 bitratestats: explicitly disabled via build config
00:35:05.819 bpf: explicitly disabled via build config
00:35:05.819 cfgfile: explicitly disabled via build config
00:35:05.819 distributor: explicitly disabled via build config
00:35:05.819 efd: explicitly disabled via build config
00:35:05.819 eventdev: explicitly disabled via build config
00:35:05.819 dispatcher: explicitly disabled via build config
00:35:05.819 gpudev: explicitly disabled via build config
00:35:05.819 gro: explicitly disabled via build config
00:35:05.819 gso: explicitly disabled via build config
00:35:05.819 ip_frag: explicitly disabled via build config
00:35:05.819 jobstats: explicitly disabled via build config
00:35:05.819 latencystats: explicitly disabled via build config
00:35:05.819 lpm: explicitly disabled via build config
00:35:05.819 member: explicitly disabled via build config
00:35:05.819 pcapng: explicitly disabled via build config
00:35:05.819 rawdev: explicitly disabled via build config
00:35:05.819 regexdev: explicitly disabled via build config
00:35:05.819 mldev: explicitly disabled via build config
00:35:05.819 rib: explicitly disabled via build config
00:35:05.819 sched: explicitly disabled via build config
00:35:05.819 stack: explicitly disabled via build config
00:35:05.819 ipsec: explicitly disabled via build config
00:35:05.819 pdcp: explicitly disabled via build config
00:35:05.819 fib: explicitly disabled via build config
00:35:05.819 port: explicitly disabled via build config
00:35:05.819 pdump: explicitly disabled via build config
00:35:05.819 table: explicitly disabled via build config
00:35:05.819 pipeline: explicitly disabled via build config
00:35:05.819 graph: explicitly disabled via build config
00:35:05.819 node: explicitly disabled via build config
00:35:05.819
00:35:05.819 drivers:
00:35:05.819 common/cpt: not in enabled drivers build config
00:35:05.819 common/dpaax: not in enabled drivers build config
00:35:05.819 common/iavf: not in enabled drivers build config
00:35:05.819 common/idpf: not in enabled drivers build config
00:35:05.819 common/ionic: not in enabled drivers build config
00:35:05.819 common/mvep: not in enabled drivers build config
00:35:05.819 common/octeontx: not in enabled drivers build config
00:35:05.819 bus/auxiliary: not in enabled drivers build config
00:35:05.819 bus/cdx: not in enabled drivers build config
00:35:05.819 bus/dpaa: not in enabled drivers build config
00:35:05.819 bus/fslmc: not in enabled drivers build config
00:35:05.819 bus/ifpga: not in enabled drivers build config
00:35:05.819 bus/platform: not in enabled drivers build config
00:35:05.819 bus/uacce: not in enabled drivers build config
00:35:05.819 bus/vmbus: not in enabled drivers build config
00:35:05.819 common/cnxk: not in enabled drivers build config
00:35:05.819 common/mlx5: not in enabled drivers build config
00:35:05.819 common/nfp: not in enabled drivers build config
00:35:05.819 common/nitrox: not in enabled drivers build config
00:35:05.819 common/qat: not in enabled drivers build config
00:35:05.819 common/sfc_efx: not in enabled drivers build config
00:35:05.819 mempool/bucket: not in enabled drivers build config
00:35:05.819 mempool/cnxk: not in enabled drivers build config
00:35:05.819 mempool/dpaa: not in enabled drivers build config
00:35:05.819 mempool/dpaa2: not in enabled drivers build config
00:35:05.819 mempool/octeontx: not in enabled drivers build config
00:35:05.819 mempool/stack: not in enabled drivers build config
00:35:05.819 dma/cnxk: not in enabled drivers build config
00:35:05.819 dma/dpaa: not in enabled drivers build config
00:35:05.819 dma/dpaa2: not in enabled drivers build config
00:35:05.819 dma/hisilicon: not in enabled drivers build config
00:35:05.819 dma/idxd: not in enabled drivers build config
00:35:05.819 dma/ioat: not in enabled drivers build config
00:35:05.819 dma/skeleton: not in enabled drivers build config
00:35:05.819 net/af_packet: not in enabled drivers build config
00:35:05.819 net/af_xdp: not in enabled drivers build config
00:35:05.819 net/ark: not in enabled drivers build config
00:35:05.819 net/atlantic: not in enabled drivers build config
00:35:05.819 net/avp: not in enabled drivers build config
00:35:05.819 net/axgbe: not in enabled drivers build config
00:35:05.819 net/bnx2x: not in enabled drivers build config
00:35:05.819 net/bnxt: not in enabled drivers build config
00:35:05.819 net/bonding: not in enabled drivers build config
00:35:05.819 net/cnxk: not in enabled drivers build config
00:35:05.819 net/cpfl: not in enabled drivers build config
00:35:05.819 net/cxgbe: not in enabled drivers build config
00:35:05.819 net/dpaa: not in enabled drivers build config
00:35:05.819 net/dpaa2: not in enabled drivers build config
00:35:05.819 net/e1000: not in enabled drivers build config
00:35:05.819 net/ena: not in enabled drivers build config
00:35:05.819 net/enetc: not in enabled drivers build config
00:35:05.819 net/enetfec: not in enabled drivers build config
00:35:05.819 net/enic: not in enabled drivers build config
00:35:05.819 net/failsafe: not in enabled drivers build config
00:35:05.819 net/fm10k: not in enabled drivers build config
00:35:05.819 net/gve: not in enabled drivers build config
00:35:05.819 net/hinic: not in enabled drivers build config
00:35:05.819 net/hns3: not in enabled drivers build config
00:35:05.819 net/i40e: not in enabled drivers build config
00:35:05.819 net/iavf: not in enabled drivers build config
00:35:05.819 net/ice: not in enabled drivers build config
00:35:05.819 net/idpf: not in enabled drivers build config
00:35:05.819 net/igc: not in enabled drivers build config
00:35:05.819 net/ionic: not in enabled drivers build config
00:35:05.819 net/ipn3ke: not in enabled drivers build config
00:35:05.819 net/ixgbe: not in enabled drivers build config
00:35:05.819 net/mana: not in enabled drivers build config
00:35:05.819 net/memif: not in enabled drivers build config
00:35:05.819 net/mlx4: not in enabled drivers build config
00:35:05.819 net/mlx5: not in enabled drivers build config
00:35:05.819 net/mvneta: not in enabled drivers build config
00:35:05.819 net/mvpp2: not in enabled drivers build config
00:35:05.819 net/netvsc: not in enabled drivers build config
00:35:05.819 net/nfb: not in enabled drivers build config
00:35:05.819 net/nfp: not in enabled drivers build config
00:35:05.820 net/ngbe: not in enabled drivers build config
00:35:05.820 net/null: not in enabled drivers build config
00:35:05.820 net/octeontx: not in enabled drivers build config
00:35:05.820 net/octeon_ep: not in enabled drivers build config
00:35:05.820 net/pcap: not in enabled drivers build config
00:35:05.820 net/pfe: not in enabled drivers build config
00:35:05.820 net/qede: not in enabled drivers build config
00:35:05.820 net/ring: not in enabled drivers build config
00:35:05.820 net/sfc: not in enabled drivers build config
00:35:05.820 net/softnic: not in enabled drivers build config
00:35:05.820 net/tap: not in enabled drivers build config
00:35:05.820 net/thunderx: not in enabled drivers build config
00:35:05.820 net/txgbe: not in enabled drivers build config
00:35:05.820 net/vdev_netvsc: not in enabled drivers build config
00:35:05.820 net/vhost: not in enabled drivers build config
00:35:05.820 net/virtio: not in enabled drivers build config
00:35:05.820 net/vmxnet3: not in enabled drivers build config
00:35:05.820 raw/*: missing internal dependency, "rawdev"
00:35:05.820 crypto/armv8: not in enabled drivers build config
00:35:05.820 crypto/bcmfs: not in enabled drivers build config
00:35:05.820 crypto/caam_jr: not in enabled drivers build config
00:35:05.820 crypto/ccp: not in enabled drivers build config
00:35:05.820 crypto/cnxk: not in enabled drivers build config
00:35:05.820 crypto/dpaa_sec: not in enabled drivers build config
00:35:05.820 crypto/dpaa2_sec: not in enabled drivers build config
00:35:05.820 crypto/ipsec_mb: not in enabled drivers build config
00:35:05.820 crypto/mlx5: not in enabled drivers build config
00:35:05.820 crypto/mvsam: not in enabled drivers build config
00:35:05.820 crypto/nitrox: not in enabled drivers build config
00:35:05.820 crypto/null: not in enabled drivers build config
00:35:05.820 crypto/octeontx: not in enabled drivers build config
00:35:05.820 crypto/openssl: not in enabled drivers build config
00:35:05.820 crypto/scheduler: not in enabled drivers build config
00:35:05.820 crypto/uadk: not in enabled drivers build config
00:35:05.820 crypto/virtio: not in enabled drivers build config
00:35:05.820 compress/isal: not in enabled drivers build config
00:35:05.820 compress/mlx5: not in enabled drivers build config
00:35:05.820 compress/nitrox: not in enabled drivers build config
00:35:05.820 compress/octeontx: not in enabled drivers build config
00:35:05.820 compress/zlib: not in enabled drivers build config
00:35:05.820 regex/*: missing internal dependency, "regexdev"
00:35:05.820 ml/*: missing internal dependency, "mldev"
00:35:05.820 vdpa/ifc: not in enabled drivers build config
00:35:05.820 vdpa/mlx5: not in enabled drivers build config
00:35:05.820 vdpa/nfp: not in enabled drivers build config
00:35:05.820 vdpa/sfc: not in enabled drivers build config
00:35:05.820 event/*: missing internal dependency, "eventdev"
00:35:05.820 baseband/*: missing internal dependency, "bbdev"
00:35:05.820 gpu/*: missing internal dependency, "gpudev"
00:35:05.820
00:35:05.820
00:35:05.820 Build targets in project: 85
00:35:05.820
00:35:05.820 DPDK 24.03.0
00:35:05.820
00:35:05.820 User defined options
00:35:05.820 buildtype : debug
00:35:05.820 default_library : shared
00:35:05.820 libdir : lib
00:35:05.820 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:35:05.820 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:35:05.820 c_link_args :
00:35:05.820 cpu_instruction_set: native
00:35:05.820 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:35:05.820 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:35:05.820 enable_docs : false
00:35:05.820 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:35:05.820 enable_kmods : false
00:35:05.820 max_lcores : 128
00:35:05.820 tests : false
00:35:05.820
00:35:05.820 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:35:06.399 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:35:06.399 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:35:06.399 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:35:06.663 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:35:06.663 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:35:06.663 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:35:06.663 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:35:06.663 [7/268] Linking static target lib/librte_kvargs.a
00:35:06.663 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:35:06.663 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:35:06.664 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:35:06.664 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:35:06.664 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:35:06.664 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:35:06.664 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:35:06.664 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:35:06.664 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:35:06.664 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:35:06.664 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:35:06.664 [19/268] Linking static target lib/librte_log.a
00:35:06.938 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:35:06.938 [21/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:35:06.938 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:35:06.938 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:35:07.208 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:35:07.208 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:35:07.208 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:35:07.208 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:35:07.208 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:35:07.208 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:35:07.208 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:35:07.208 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:35:07.208 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:35:07.208 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:35:07.208 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:35:07.209 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:35:07.209 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:35:07.209 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:35:07.209 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:35:07.209 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:35:07.209 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:35:07.209 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:35:07.209 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:35:07.209 [43/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:35:07.209 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:35:07.209 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:35:07.209 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:35:07.209 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:35:07.209 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:35:07.209 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:35:07.209 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:35:07.209 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:35:07.209 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:35:07.209 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:35:07.209 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:35:07.209 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:35:07.209 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:35:07.209 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:35:07.209 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:35:07.209 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:35:07.209 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:35:07.209 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:35:07.209 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:35:07.209 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:35:07.209 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:35:07.209 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:35:07.209 [66/268] Linking static target lib/librte_telemetry.a
00:35:07.209 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:35:07.209 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:35:07.209 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:35:07.209 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:35:07.209 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:35:07.209 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:35:07.209 [73/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:35:07.209 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:35:07.209 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:35:07.209 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:35:07.209 [77/268] Linking static target lib/librte_pci.a
00:35:07.209 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:35:07.209 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:35:07.209 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:35:07.209 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:35:07.209 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:35:07.472 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:35:07.472 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:35:07.472 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:35:07.472 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:35:07.472 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:35:07.472 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:35:07.472 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:35:07.472 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:35:07.472 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:35:07.472 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:35:07.472 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:35:07.472 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:35:07.472 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:35:07.472 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:35:07.472 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:35:07.472 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:35:07.472 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:35:07.472 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:35:07.472 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:35:07.472 [102/268] Linking static target lib/librte_ring.a
00:35:07.472 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:35:07.472 [104/268] Linking static target lib/librte_eal.a
00:35:07.472 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:35:07.472 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:35:07.472 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:35:07.472 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:35:07.472 [109/268] Linking static target lib/librte_mempool.a
00:35:07.472 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:35:07.472 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:35:07.472 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:35:07.472 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:35:07.472 [114/268] Linking static target lib/librte_net.a
00:35:07.472 [115/268] Linking static target lib/librte_rcu.a
00:35:07.741 [116/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:35:07.741 [117/268] Linking static target lib/librte_meter.a
00:35:07.741 [118/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:35:07.741 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:35:07.741 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:35:07.741 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:35:07.741 [122/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:35:07.741 [123/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:35:08.001 [125/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:35:08.001 [126/268] Linking target lib/librte_log.so.24.1
00:35:08.001 [127/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:35:08.001 [128/268] Linking static target lib/librte_mbuf.a
00:35:08.001 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:35:08.001 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:35:08.001 [131/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:35:08.001 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:35:08.001 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:35:08.001 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:35:08.001 [135/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:35:08.001 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:35:08.001 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:35:08.001 [138/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:35:08.001 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [140/268] Linking static target lib/librte_timer.a
00:35:08.001 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:35:08.001 [142/268] Linking static target lib/librte_cmdline.a
00:35:08.001 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:35:08.001 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:35:08.001 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:35:08.001 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:35:08.001 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:35:08.001 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:35:08.001 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:35:08.001 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:35:08.001 [151/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:35:08.001 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:35:08.001 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:35:08.001 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:35:08.001 [157/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:35:08.001 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:35:08.001 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:35:08.001 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.001 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:35:08.001 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:35:08.001 [164/268] Linking static target lib/librte_compressdev.a
00:35:08.001 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:35:08.001 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:35:08.001 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:35:08.001 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:35:08.001 [169/268] Linking static target lib/librte_dmadev.a
00:35:08.262 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:35:08.262 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:35:08.262 [172/268] Linking target lib/librte_telemetry.so.24.1
00:35:08.262 [173/268] Linking target lib/librte_kvargs.so.24.1
00:35:08.262 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:35:08.262 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:35:08.262 [176/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:35:08.262 [177/268] Linking static target lib/librte_power.a
00:35:08.262 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:35:08.262 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:35:08.262 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:35:08.262 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:35:08.262 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:35:08.262 [183/268] Linking static target lib/librte_security.a
00:35:08.262 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:35:08.262 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:35:08.262 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:35:08.262 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:35:08.262 [188/268] Linking static target lib/librte_reorder.a
00:35:08.262 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:35:08.262 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:35:08.262 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:35:08.262 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:35:08.262 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:35:08.524 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:35:08.524 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:35:08.524 [196/268] Linking static target lib/librte_hash.a
00:35:08.524 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:35:08.524 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:35:08.524 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:35:08.524 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:35:08.524 [201/268] Linking static target drivers/librte_bus_vdev.a
00:35:08.524 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.524 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:35:08.524 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.524 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:35:08.524 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:35:08.524 [207/268] Linking static target drivers/librte_bus_pci.a
00:35:08.524 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:35:08.524 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:35:08.524 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:35:08.791 [211/268] Linking static target lib/librte_cryptodev.a
00:35:08.791 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:35:08.791 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:35:08.791 [214/268] Linking static target drivers/librte_mempool_ring.a
00:35:08.791 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.791 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.791 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.791 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.791 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:35:08.791 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:35:09.362 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:35:09.362 [222/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:35:09.362 [223/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:35:09.362 [224/268] Linking static target lib/librte_ethdev.a
00:35:09.362 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:35:09.362 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:35:09.624 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:35:10.568 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:35:10.568 [229/268] Linking static target lib/librte_vhost.a
00:35:10.828 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:35:13.373 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:35:19.959 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:35:20.222 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:35:20.483 [234/268] Linking target lib/librte_eal.so.24.1
00:35:20.483 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:35:20.745 [236/268] Linking target drivers/librte_bus_vdev.so.24.1
00:35:20.745 [237/268] Linking target lib/librte_pci.so.24.1
00:35:20.745 [238/268] Linking target lib/librte_meter.so.24.1
00:35:20.745 [239/268] Linking target lib/librte_ring.so.24.1
00:35:20.745 [240/268] Linking target lib/librte_timer.so.24.1
00:35:20.745 [241/268] Linking target lib/librte_dmadev.so.24.1
00:35:20.745 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:35:20.745 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:35:20.745 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:35:20.745 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:35:20.745 [246/268] Linking target lib/librte_rcu.so.24.1
00:35:20.745 [247/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:35:20.745 [248/268] Linking target lib/librte_mempool.so.24.1
00:35:20.745 [249/268] Linking target drivers/librte_bus_pci.so.24.1
00:35:21.007 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:35:21.007 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:35:21.007 [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:35:21.007 [253/268] Linking target lib/librte_mbuf.so.24.1
00:35:21.268 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:35:21.268 [255/268] Linking target lib/librte_compressdev.so.24.1
00:35:21.269 [256/268] Linking target lib/librte_cryptodev.so.24.1
00:35:21.269 [257/268] Linking target lib/librte_reorder.so.24.1
00:35:21.269 [258/268] Linking target lib/librte_net.so.24.1
00:35:21.530 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:35:21.530 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:35:21.530 [261/268] Linking target lib/librte_cmdline.so.24.1
00:35:21.530 [262/268] Linking target lib/librte_hash.so.24.1
00:35:21.530 [263/268] Linking target lib/librte_security.so.24.1
00:35:21.530 [264/268] Linking target lib/librte_ethdev.so.24.1
00:35:21.794 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:35:21.794 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:35:21.794 [267/268] Linking target lib/librte_power.so.24.1
00:35:21.794 [268/268] Linking target lib/librte_vhost.so.24.1
00:35:21.794 INFO: autodetecting backend as ninja
00:35:21.794 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 72
00:35:34.033 CC lib/ut_mock/mock.o
00:35:34.034 CC lib/log/log.o
00:35:34.034 CC lib/log/log_flags.o
00:35:34.034 CC lib/log/log_deprecated.o
00:35:34.034 CC lib/ut/ut.o
00:35:34.294 LIB libspdk_ut_mock.a
00:35:34.294 SO libspdk_ut_mock.so.6.0
00:35:34.294 LIB libspdk_log.a
00:35:34.294 SO libspdk_log.so.7.1
00:35:34.294 LIB libspdk_ut.a
00:35:34.294 SYMLINK libspdk_ut_mock.so
00:35:34.294 SO libspdk_ut.so.2.0
00:35:34.555 SYMLINK libspdk_log.so
00:35:34.555 SYMLINK libspdk_ut.so
00:35:34.816 CC lib/ioat/ioat.o
00:35:34.816 CC lib/util/base64.o
00:35:34.816 CC lib/util/bit_array.o
00:35:34.816 CC lib/util/crc16.o
00:35:34.816 CC lib/util/cpuset.o
00:35:34.816 CC lib/util/crc32.o
00:35:34.816 CC lib/dma/dma.o
00:35:34.816 CC lib/util/crc32c.o
00:35:34.816 CC lib/util/crc32_ieee.o
00:35:34.816 CC lib/util/crc64.o
00:35:34.816 CC lib/util/dif.o
00:35:34.816 CC lib/util/fd.o
00:35:34.816 CXX lib/trace_parser/trace.o
00:35:34.816 CC lib/util/fd_group.o
00:35:34.816 CC lib/util/file.o
00:35:34.816 CC lib/util/hexlify.o
00:35:34.816 CC lib/util/pipe.o
00:35:34.816 CC lib/util/math.o
00:35:34.816 CC lib/util/iov.o
00:35:34.816 CC lib/util/net.o
00:35:34.816 CC lib/util/strerror_tls.o
00:35:34.816 CC lib/util/string.o
00:35:34.816 CC lib/util/zipf.o
00:35:34.816 CC lib/util/uuid.o
00:35:34.816 CC lib/util/xor.o
00:35:34.816 CC lib/util/md5.o
00:35:35.077 CC lib/vfio_user/host/vfio_user_pci.o
00:35:35.077 CC lib/vfio_user/host/vfio_user.o
00:35:35.077 LIB libspdk_dma.a
00:35:35.077 SO libspdk_dma.so.5.0
00:35:35.077 LIB libspdk_ioat.a
00:35:35.077 SYMLINK libspdk_dma.so
00:35:35.077 SO
libspdk_ioat.so.7.0 00:35:35.338 SYMLINK libspdk_ioat.so 00:35:35.338 LIB libspdk_vfio_user.a 00:35:35.338 SO libspdk_vfio_user.so.5.0 00:35:35.338 SYMLINK libspdk_vfio_user.so 00:35:35.614 LIB libspdk_util.a 00:35:35.614 SO libspdk_util.so.10.1 00:35:35.876 SYMLINK libspdk_util.so 00:35:35.876 LIB libspdk_trace_parser.a 00:35:35.876 SO libspdk_trace_parser.so.6.0 00:35:35.876 SYMLINK libspdk_trace_parser.so 00:35:36.137 CC lib/idxd/idxd.o 00:35:36.137 CC lib/env_dpdk/env.o 00:35:36.137 CC lib/idxd/idxd_kernel.o 00:35:36.137 CC lib/idxd/idxd_user.o 00:35:36.137 CC lib/env_dpdk/init.o 00:35:36.137 CC lib/env_dpdk/memory.o 00:35:36.137 CC lib/env_dpdk/pci.o 00:35:36.137 CC lib/rdma_utils/rdma_utils.o 00:35:36.137 CC lib/env_dpdk/threads.o 00:35:36.137 CC lib/env_dpdk/pci_ioat.o 00:35:36.137 CC lib/vmd/vmd.o 00:35:36.137 CC lib/vmd/led.o 00:35:36.137 CC lib/json/json_parse.o 00:35:36.137 CC lib/env_dpdk/pci_vmd.o 00:35:36.137 CC lib/env_dpdk/pci_virtio.o 00:35:36.137 CC lib/env_dpdk/pci_event.o 00:35:36.137 CC lib/json/json_util.o 00:35:36.137 CC lib/env_dpdk/pci_idxd.o 00:35:36.137 CC lib/json/json_write.o 00:35:36.137 CC lib/env_dpdk/pci_dpdk_2211.o 00:35:36.137 CC lib/env_dpdk/pci_dpdk.o 00:35:36.137 CC lib/env_dpdk/sigbus_handler.o 00:35:36.137 CC lib/env_dpdk/pci_dpdk_2207.o 00:35:36.137 CC lib/conf/conf.o 00:35:36.411 LIB libspdk_conf.a 00:35:36.411 SO libspdk_conf.so.6.0 00:35:36.411 LIB libspdk_rdma_utils.a 00:35:36.411 SO libspdk_rdma_utils.so.1.0 00:35:36.411 LIB libspdk_json.a 00:35:36.411 SYMLINK libspdk_conf.so 00:35:36.683 SO libspdk_json.so.6.0 00:35:36.683 SYMLINK libspdk_rdma_utils.so 00:35:36.683 SYMLINK libspdk_json.so 00:35:36.683 LIB libspdk_idxd.a 00:35:36.973 SO libspdk_idxd.so.12.1 00:35:36.973 LIB libspdk_vmd.a 00:35:36.973 CC lib/rdma_provider/common.o 00:35:36.973 CC lib/rdma_provider/rdma_provider_verbs.o 00:35:36.973 SO libspdk_vmd.so.6.0 00:35:36.973 SYMLINK libspdk_idxd.so 00:35:36.973 CC lib/jsonrpc/jsonrpc_server.o 00:35:36.973 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:35:36.973 CC lib/jsonrpc/jsonrpc_client.o 00:35:36.973 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:35:36.973 SYMLINK libspdk_vmd.so 00:35:37.291 LIB libspdk_rdma_provider.a 00:35:37.291 SO libspdk_rdma_provider.so.7.0 00:35:37.291 LIB libspdk_jsonrpc.a 00:35:37.291 SO libspdk_jsonrpc.so.6.0 00:35:37.291 SYMLINK libspdk_rdma_provider.so 00:35:37.291 SYMLINK libspdk_jsonrpc.so 00:35:37.579 LIB libspdk_env_dpdk.a 00:35:37.579 CC lib/rpc/rpc.o 00:35:37.839 SO libspdk_env_dpdk.so.15.1 00:35:37.839 SYMLINK libspdk_env_dpdk.so 00:35:37.839 LIB libspdk_rpc.a 00:35:37.839 SO libspdk_rpc.so.6.0 00:35:38.099 SYMLINK libspdk_rpc.so 00:35:38.359 CC lib/trace/trace_flags.o 00:35:38.359 CC lib/trace/trace.o 00:35:38.359 CC lib/trace/trace_rpc.o 00:35:38.359 CC lib/keyring/keyring.o 00:35:38.359 CC lib/keyring/keyring_rpc.o 00:35:38.359 CC lib/notify/notify.o 00:35:38.359 CC lib/notify/notify_rpc.o 00:35:38.620 LIB libspdk_notify.a 00:35:38.620 SO libspdk_notify.so.6.0 00:35:38.620 LIB libspdk_keyring.a 00:35:38.620 SYMLINK libspdk_notify.so 00:35:38.620 SO libspdk_keyring.so.2.0 00:35:38.620 LIB libspdk_trace.a 00:35:38.620 SO libspdk_trace.so.11.0 00:35:38.620 SYMLINK libspdk_keyring.so 00:35:38.881 SYMLINK libspdk_trace.so 00:35:39.140 CC lib/sock/sock.o 00:35:39.140 CC lib/sock/sock_rpc.o 00:35:39.140 CC lib/thread/thread.o 00:35:39.140 CC lib/thread/iobuf.o 00:35:39.709 LIB libspdk_sock.a 00:35:39.709 SO libspdk_sock.so.10.0 00:35:39.709 SYMLINK libspdk_sock.so 00:35:40.278 CC lib/nvme/nvme_ctrlr_cmd.o 00:35:40.278 CC lib/nvme/nvme_ctrlr.o 00:35:40.279 CC lib/nvme/nvme_ns.o 00:35:40.279 CC lib/nvme/nvme_pcie_common.o 00:35:40.279 CC lib/nvme/nvme_fabric.o 00:35:40.279 CC lib/nvme/nvme_ns_cmd.o 00:35:40.279 CC lib/nvme/nvme_qpair.o 00:35:40.279 CC lib/nvme/nvme_pcie.o 00:35:40.279 CC lib/nvme/nvme.o 00:35:40.279 CC lib/nvme/nvme_discovery.o 00:35:40.279 CC lib/nvme/nvme_quirks.o 00:35:40.279 CC lib/nvme/nvme_transport.o 00:35:40.279 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:35:40.279 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:35:40.279 CC lib/nvme/nvme_tcp.o 00:35:40.279 CC lib/nvme/nvme_opal.o 00:35:40.279 CC lib/nvme/nvme_io_msg.o 00:35:40.279 CC lib/nvme/nvme_poll_group.o 00:35:40.279 CC lib/nvme/nvme_zns.o 00:35:40.279 CC lib/nvme/nvme_stubs.o 00:35:40.279 CC lib/nvme/nvme_auth.o 00:35:40.279 CC lib/nvme/nvme_cuse.o 00:35:40.279 CC lib/nvme/nvme_vfio_user.o 00:35:40.279 CC lib/nvme/nvme_rdma.o 00:35:40.849 LIB libspdk_thread.a 00:35:40.849 SO libspdk_thread.so.11.0 00:35:40.849 SYMLINK libspdk_thread.so 00:35:41.109 CC lib/virtio/virtio_vfio_user.o 00:35:41.109 CC lib/virtio/virtio_vhost_user.o 00:35:41.109 CC lib/virtio/virtio.o 00:35:41.109 CC lib/virtio/virtio_pci.o 00:35:41.368 CC lib/init/json_config.o 00:35:41.368 CC lib/init/subsystem.o 00:35:41.368 CC lib/vfu_tgt/tgt_endpoint.o 00:35:41.368 CC lib/init/subsystem_rpc.o 00:35:41.368 CC lib/vfu_tgt/tgt_rpc.o 00:35:41.368 CC lib/init/rpc.o 00:35:41.368 CC lib/blob/blobstore.o 00:35:41.368 CC lib/blob/request.o 00:35:41.368 CC lib/blob/zeroes.o 00:35:41.368 CC lib/fsdev/fsdev.o 00:35:41.368 CC lib/blob/blob_bs_dev.o 00:35:41.368 CC lib/fsdev/fsdev_io.o 00:35:41.368 CC lib/fsdev/fsdev_rpc.o 00:35:41.368 CC lib/accel/accel.o 00:35:41.368 CC lib/accel/accel_rpc.o 00:35:41.368 CC lib/accel/accel_sw.o 00:35:41.629 LIB libspdk_init.a 00:35:41.629 SO libspdk_init.so.6.0 00:35:41.629 LIB libspdk_virtio.a 00:35:41.629 LIB libspdk_vfu_tgt.a 00:35:41.629 SO libspdk_virtio.so.7.0 00:35:41.629 SYMLINK libspdk_init.so 00:35:41.629 SO libspdk_vfu_tgt.so.3.0 00:35:41.629 SYMLINK libspdk_virtio.so 00:35:41.889 SYMLINK libspdk_vfu_tgt.so 00:35:41.889 LIB libspdk_fsdev.a 00:35:42.149 LIB libspdk_nvme.a 00:35:42.149 CC lib/event/app.o 00:35:42.149 CC lib/event/reactor.o 00:35:42.149 CC lib/event/scheduler_static.o 00:35:42.149 CC lib/event/log_rpc.o 00:35:42.149 CC lib/event/app_rpc.o 00:35:42.149 SO libspdk_fsdev.so.2.0 00:35:42.149 SYMLINK libspdk_fsdev.so 00:35:42.149 
SO libspdk_nvme.so.15.0 00:35:42.408 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:35:42.408 LIB libspdk_accel.a 00:35:42.408 SYMLINK libspdk_nvme.so 00:35:42.408 SO libspdk_accel.so.16.0 00:35:42.408 LIB libspdk_event.a 00:35:42.667 SO libspdk_event.so.14.0 00:35:42.667 SYMLINK libspdk_accel.so 00:35:42.667 SYMLINK libspdk_event.so 00:35:42.926 LIB libspdk_fuse_dispatcher.a 00:35:42.926 CC lib/bdev/bdev.o 00:35:42.926 CC lib/bdev/bdev_rpc.o 00:35:42.926 CC lib/bdev/bdev_zone.o 00:35:42.926 CC lib/bdev/part.o 00:35:42.926 CC lib/bdev/scsi_nvme.o 00:35:42.926 SO libspdk_fuse_dispatcher.so.1.0 00:35:43.184 SYMLINK libspdk_fuse_dispatcher.so 00:35:44.573 LIB libspdk_blob.a 00:35:44.573 SO libspdk_blob.so.12.0 00:35:44.573 SYMLINK libspdk_blob.so 00:35:45.144 CC lib/lvol/lvol.o 00:35:45.144 LIB libspdk_bdev.a 00:35:45.144 CC lib/blobfs/tree.o 00:35:45.144 CC lib/blobfs/blobfs.o 00:35:45.144 SO libspdk_bdev.so.17.0 00:35:45.144 SYMLINK libspdk_bdev.so 00:35:45.407 CC lib/ublk/ublk.o 00:35:45.407 CC lib/ublk/ublk_rpc.o 00:35:45.407 CC lib/ftl/ftl_core.o 00:35:45.407 CC lib/nvmf/ctrlr.o 00:35:45.407 CC lib/nvmf/ctrlr_discovery.o 00:35:45.407 CC lib/ftl/ftl_init.o 00:35:45.407 CC lib/nvmf/ctrlr_bdev.o 00:35:45.407 CC lib/nvmf/nvmf_rpc.o 00:35:45.407 CC lib/ftl/ftl_layout.o 00:35:45.407 CC lib/ftl/ftl_debug.o 00:35:45.407 CC lib/nvmf/nvmf.o 00:35:45.407 CC lib/nvmf/subsystem.o 00:35:45.407 CC lib/nbd/nbd.o 00:35:45.407 CC lib/nbd/nbd_rpc.o 00:35:45.407 CC lib/ftl/ftl_sb.o 00:35:45.407 CC lib/nvmf/tcp.o 00:35:45.407 CC lib/ftl/ftl_io.o 00:35:45.407 CC lib/nvmf/transport.o 00:35:45.407 CC lib/ftl/ftl_l2p.o 00:35:45.407 CC lib/scsi/dev.o 00:35:45.407 CC lib/scsi/lun.o 00:35:45.407 CC lib/ftl/ftl_l2p_flat.o 00:35:45.407 CC lib/nvmf/mdns_server.o 00:35:45.407 CC lib/scsi/port.o 00:35:45.407 CC lib/ftl/ftl_band.o 00:35:45.407 CC lib/ftl/ftl_nv_cache.o 00:35:45.407 CC lib/nvmf/stubs.o 00:35:45.407 CC lib/nvmf/vfio_user.o 00:35:45.407 CC lib/scsi/scsi.o 00:35:45.407 CC 
lib/scsi/scsi_bdev.o 00:35:45.407 CC lib/ftl/ftl_band_ops.o 00:35:45.407 CC lib/ftl/ftl_writer.o 00:35:45.407 CC lib/ftl/ftl_rq.o 00:35:45.407 CC lib/scsi/scsi_pr.o 00:35:45.407 CC lib/nvmf/rdma.o 00:35:45.408 CC lib/ftl/ftl_reloc.o 00:35:45.408 CC lib/scsi/task.o 00:35:45.408 CC lib/scsi/scsi_rpc.o 00:35:45.408 CC lib/nvmf/auth.o 00:35:45.682 CC lib/ftl/ftl_l2p_cache.o 00:35:45.682 CC lib/ftl/ftl_p2l.o 00:35:45.682 CC lib/ftl/ftl_p2l_log.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_band.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:45.682 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:45.682 CC lib/ftl/utils/ftl_conf.o 00:35:45.682 CC lib/ftl/utils/ftl_md.o 00:35:45.682 CC lib/ftl/utils/ftl_mempool.o 00:35:45.682 CC lib/ftl/utils/ftl_bitmap.o 00:35:45.682 CC lib/ftl/utils/ftl_property.o 00:35:45.682 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:45.682 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:45.682 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:45.682 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:45.682 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:45.682 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:45.682 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:45.682 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:45.682 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:35:45.682 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:45.945 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:45.945 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:35:45.945 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:35:45.945 CC lib/ftl/base/ftl_base_dev.o 00:35:45.945 CC lib/ftl/ftl_trace.o 00:35:45.945 CC 
lib/ftl/base/ftl_base_bdev.o 00:35:46.206 LIB libspdk_blobfs.a 00:35:46.206 SO libspdk_blobfs.so.11.0 00:35:46.206 SYMLINK libspdk_blobfs.so 00:35:46.466 LIB libspdk_nbd.a 00:35:46.466 LIB libspdk_lvol.a 00:35:46.466 SO libspdk_nbd.so.7.0 00:35:46.466 SO libspdk_lvol.so.11.0 00:35:46.466 LIB libspdk_scsi.a 00:35:46.466 SYMLINK libspdk_lvol.so 00:35:46.466 SYMLINK libspdk_nbd.so 00:35:46.466 SO libspdk_scsi.so.9.0 00:35:46.466 LIB libspdk_ublk.a 00:35:46.466 SYMLINK libspdk_scsi.so 00:35:46.726 SO libspdk_ublk.so.3.0 00:35:46.726 SYMLINK libspdk_ublk.so 00:35:46.986 LIB libspdk_ftl.a 00:35:46.986 CC lib/iscsi/init_grp.o 00:35:46.986 CC lib/iscsi/conn.o 00:35:46.986 CC lib/iscsi/iscsi.o 00:35:46.986 CC lib/iscsi/tgt_node.o 00:35:46.986 CC lib/iscsi/param.o 00:35:46.986 CC lib/iscsi/portal_grp.o 00:35:46.986 CC lib/vhost/vhost.o 00:35:46.986 CC lib/vhost/vhost_blk.o 00:35:46.986 CC lib/iscsi/iscsi_subsystem.o 00:35:46.986 CC lib/vhost/vhost_rpc.o 00:35:46.986 CC lib/iscsi/iscsi_rpc.o 00:35:46.986 CC lib/iscsi/task.o 00:35:46.986 CC lib/vhost/vhost_scsi.o 00:35:46.986 CC lib/vhost/rte_vhost_user.o 00:35:47.245 SO libspdk_ftl.so.9.0 00:35:47.504 SYMLINK libspdk_ftl.so 00:35:48.073 LIB libspdk_vhost.a 00:35:48.073 LIB libspdk_nvmf.a 00:35:48.073 SO libspdk_vhost.so.8.0 00:35:48.332 SO libspdk_nvmf.so.20.0 00:35:48.332 SYMLINK libspdk_vhost.so 00:35:48.332 LIB libspdk_iscsi.a 00:35:48.592 SYMLINK libspdk_nvmf.so 00:35:48.592 SO libspdk_iscsi.so.8.0 00:35:48.592 SYMLINK libspdk_iscsi.so 00:35:49.162 CC module/vfu_device/vfu_virtio_blk.o 00:35:49.162 CC module/vfu_device/vfu_virtio.o 00:35:49.162 CC module/vfu_device/vfu_virtio_scsi.o 00:35:49.163 CC module/vfu_device/vfu_virtio_fs.o 00:35:49.163 CC module/vfu_device/vfu_virtio_rpc.o 00:35:49.163 CC module/env_dpdk/env_dpdk_rpc.o 00:35:49.425 CC module/accel/ioat/accel_ioat.o 00:35:49.425 CC module/accel/ioat/accel_ioat_rpc.o 00:35:49.425 CC module/accel/error/accel_error.o 00:35:49.425 CC 
module/accel/error/accel_error_rpc.o 00:35:49.425 CC module/keyring/file/keyring.o 00:35:49.425 CC module/keyring/file/keyring_rpc.o 00:35:49.425 CC module/accel/iaa/accel_iaa_rpc.o 00:35:49.425 CC module/keyring/linux/keyring.o 00:35:49.425 CC module/accel/iaa/accel_iaa.o 00:35:49.425 CC module/accel/dsa/accel_dsa.o 00:35:49.425 CC module/keyring/linux/keyring_rpc.o 00:35:49.425 CC module/accel/dsa/accel_dsa_rpc.o 00:35:49.425 CC module/blob/bdev/blob_bdev.o 00:35:49.425 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:35:49.425 CC module/scheduler/dynamic/scheduler_dynamic.o 00:35:49.425 CC module/sock/posix/posix.o 00:35:49.425 CC module/fsdev/aio/fsdev_aio_rpc.o 00:35:49.425 CC module/fsdev/aio/fsdev_aio.o 00:35:49.425 CC module/fsdev/aio/linux_aio_mgr.o 00:35:49.425 CC module/scheduler/gscheduler/gscheduler.o 00:35:49.425 LIB libspdk_env_dpdk_rpc.a 00:35:49.425 SO libspdk_env_dpdk_rpc.so.6.0 00:35:49.425 SYMLINK libspdk_env_dpdk_rpc.so 00:35:49.683 LIB libspdk_accel_ioat.a 00:35:49.683 LIB libspdk_keyring_linux.a 00:35:49.683 SO libspdk_accel_ioat.so.6.0 00:35:49.683 LIB libspdk_keyring_file.a 00:35:49.683 LIB libspdk_scheduler_dpdk_governor.a 00:35:49.683 LIB libspdk_scheduler_gscheduler.a 00:35:49.683 SO libspdk_keyring_linux.so.1.0 00:35:49.683 SO libspdk_scheduler_dpdk_governor.so.4.0 00:35:49.683 LIB libspdk_accel_error.a 00:35:49.683 SO libspdk_keyring_file.so.2.0 00:35:49.683 LIB libspdk_accel_iaa.a 00:35:49.683 LIB libspdk_scheduler_dynamic.a 00:35:49.683 SO libspdk_scheduler_gscheduler.so.4.0 00:35:49.683 SO libspdk_accel_error.so.2.0 00:35:49.683 SYMLINK libspdk_accel_ioat.so 00:35:49.683 SO libspdk_accel_iaa.so.3.0 00:35:49.683 SO libspdk_scheduler_dynamic.so.4.0 00:35:49.683 SYMLINK libspdk_scheduler_dpdk_governor.so 00:35:49.683 SYMLINK libspdk_keyring_linux.so 00:35:49.683 LIB libspdk_blob_bdev.a 00:35:49.683 SYMLINK libspdk_keyring_file.so 00:35:49.683 LIB libspdk_accel_dsa.a 00:35:49.683 SYMLINK libspdk_scheduler_gscheduler.so 00:35:49.683 
SYMLINK libspdk_accel_error.so 00:35:49.683 SO libspdk_blob_bdev.so.12.0 00:35:49.683 SYMLINK libspdk_scheduler_dynamic.so 00:35:49.683 SYMLINK libspdk_accel_iaa.so 00:35:49.683 SO libspdk_accel_dsa.so.5.0 00:35:49.942 SYMLINK libspdk_blob_bdev.so 00:35:49.942 SYMLINK libspdk_accel_dsa.so 00:35:49.942 LIB libspdk_vfu_device.a 00:35:49.942 SO libspdk_vfu_device.so.3.0 00:35:50.200 SYMLINK libspdk_vfu_device.so 00:35:50.200 LIB libspdk_fsdev_aio.a 00:35:50.200 SO libspdk_fsdev_aio.so.1.0 00:35:50.200 LIB libspdk_sock_posix.a 00:35:50.200 SO libspdk_sock_posix.so.6.0 00:35:50.200 SYMLINK libspdk_fsdev_aio.so 00:35:50.200 CC module/blobfs/bdev/blobfs_bdev.o 00:35:50.200 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:35:50.200 CC module/bdev/error/vbdev_error_rpc.o 00:35:50.200 CC module/bdev/error/vbdev_error.o 00:35:50.200 CC module/bdev/delay/vbdev_delay.o 00:35:50.200 CC module/bdev/delay/vbdev_delay_rpc.o 00:35:50.200 CC module/bdev/malloc/bdev_malloc.o 00:35:50.200 CC module/bdev/zone_block/vbdev_zone_block.o 00:35:50.200 CC module/bdev/malloc/bdev_malloc_rpc.o 00:35:50.200 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:35:50.200 CC module/bdev/lvol/vbdev_lvol.o 00:35:50.460 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:35:50.460 CC module/bdev/raid/bdev_raid_rpc.o 00:35:50.460 CC module/bdev/raid/bdev_raid_sb.o 00:35:50.460 CC module/bdev/raid/raid1.o 00:35:50.460 CC module/bdev/raid/bdev_raid.o 00:35:50.460 CC module/bdev/raid/raid0.o 00:35:50.460 CC module/bdev/raid/concat.o 00:35:50.460 CC module/bdev/virtio/bdev_virtio_blk.o 00:35:50.460 CC module/bdev/virtio/bdev_virtio_rpc.o 00:35:50.460 CC module/bdev/virtio/bdev_virtio_scsi.o 00:35:50.460 CC module/bdev/gpt/gpt.o 00:35:50.460 CC module/bdev/split/vbdev_split.o 00:35:50.460 CC module/bdev/gpt/vbdev_gpt.o 00:35:50.460 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:35:50.460 CC module/bdev/passthru/vbdev_passthru.o 00:35:50.460 CC module/bdev/split/vbdev_split_rpc.o 00:35:50.460 CC module/bdev/ftl/bdev_ftl.o 
00:35:50.460 CC module/bdev/ftl/bdev_ftl_rpc.o 00:35:50.460 CC module/bdev/nvme/bdev_nvme.o 00:35:50.460 CC module/bdev/nvme/bdev_nvme_rpc.o 00:35:50.460 CC module/bdev/nvme/nvme_rpc.o 00:35:50.460 CC module/bdev/null/bdev_null.o 00:35:50.460 CC module/bdev/nvme/bdev_mdns_client.o 00:35:50.460 CC module/bdev/nvme/vbdev_opal.o 00:35:50.460 CC module/bdev/null/bdev_null_rpc.o 00:35:50.460 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:35:50.460 CC module/bdev/nvme/vbdev_opal_rpc.o 00:35:50.460 CC module/bdev/iscsi/bdev_iscsi.o 00:35:50.460 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:35:50.460 CC module/bdev/aio/bdev_aio.o 00:35:50.460 CC module/bdev/aio/bdev_aio_rpc.o 00:35:50.460 SYMLINK libspdk_sock_posix.so 00:35:50.719 LIB libspdk_bdev_error.a 00:35:50.719 LIB libspdk_bdev_split.a 00:35:50.719 SO libspdk_bdev_error.so.6.0 00:35:50.719 LIB libspdk_bdev_delay.a 00:35:50.719 LIB libspdk_blobfs_bdev.a 00:35:50.719 LIB libspdk_bdev_null.a 00:35:50.719 SO libspdk_bdev_split.so.6.0 00:35:50.719 LIB libspdk_bdev_passthru.a 00:35:50.719 LIB libspdk_bdev_gpt.a 00:35:50.719 SO libspdk_bdev_delay.so.6.0 00:35:50.719 SO libspdk_blobfs_bdev.so.6.0 00:35:50.719 SO libspdk_bdev_null.so.6.0 00:35:50.719 SO libspdk_bdev_gpt.so.6.0 00:35:50.719 SYMLINK libspdk_bdev_error.so 00:35:50.719 SO libspdk_bdev_passthru.so.6.0 00:35:50.719 SYMLINK libspdk_bdev_split.so 00:35:50.719 SYMLINK libspdk_bdev_delay.so 00:35:50.719 LIB libspdk_bdev_iscsi.a 00:35:50.719 SYMLINK libspdk_bdev_gpt.so 00:35:50.719 SYMLINK libspdk_bdev_null.so 00:35:50.979 SYMLINK libspdk_blobfs_bdev.so 00:35:50.979 SYMLINK libspdk_bdev_passthru.so 00:35:50.979 SO libspdk_bdev_iscsi.so.6.0 00:35:50.979 LIB libspdk_bdev_zone_block.a 00:35:50.979 LIB libspdk_bdev_ftl.a 00:35:50.979 SO libspdk_bdev_zone_block.so.6.0 00:35:50.979 LIB libspdk_bdev_aio.a 00:35:50.979 SO libspdk_bdev_ftl.so.6.0 00:35:50.979 LIB libspdk_bdev_malloc.a 00:35:50.979 SYMLINK libspdk_bdev_iscsi.so 00:35:50.979 LIB libspdk_bdev_virtio.a 00:35:50.979 SO 
libspdk_bdev_malloc.so.6.0 00:35:50.979 SO libspdk_bdev_aio.so.6.0 00:35:50.979 SYMLINK libspdk_bdev_ftl.so 00:35:50.979 SYMLINK libspdk_bdev_zone_block.so 00:35:50.979 SO libspdk_bdev_virtio.so.6.0 00:35:50.979 SYMLINK libspdk_bdev_malloc.so 00:35:50.979 SYMLINK libspdk_bdev_aio.so 00:35:50.979 SYMLINK libspdk_bdev_virtio.so 00:35:50.979 LIB libspdk_bdev_lvol.a 00:35:51.239 SO libspdk_bdev_lvol.so.6.0 00:35:51.239 SYMLINK libspdk_bdev_lvol.so 00:35:51.498 LIB libspdk_bdev_raid.a 00:35:51.498 SO libspdk_bdev_raid.so.6.0 00:35:51.758 SYMLINK libspdk_bdev_raid.so 00:35:53.139 LIB libspdk_bdev_nvme.a 00:35:53.398 SO libspdk_bdev_nvme.so.7.1 00:35:53.398 SYMLINK libspdk_bdev_nvme.so 00:35:54.336 CC module/event/subsystems/sock/sock.o 00:35:54.336 CC module/event/subsystems/vmd/vmd.o 00:35:54.336 CC module/event/subsystems/vmd/vmd_rpc.o 00:35:54.336 CC module/event/subsystems/keyring/keyring.o 00:35:54.336 CC module/event/subsystems/scheduler/scheduler.o 00:35:54.336 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:35:54.336 CC module/event/subsystems/iobuf/iobuf.o 00:35:54.336 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:35:54.336 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:35:54.336 CC module/event/subsystems/fsdev/fsdev.o 00:35:54.336 LIB libspdk_event_vfu_tgt.a 00:35:54.336 LIB libspdk_event_scheduler.a 00:35:54.336 LIB libspdk_event_fsdev.a 00:35:54.336 LIB libspdk_event_vmd.a 00:35:54.336 SO libspdk_event_scheduler.so.4.0 00:35:54.336 LIB libspdk_event_sock.a 00:35:54.336 SO libspdk_event_vfu_tgt.so.3.0 00:35:54.336 LIB libspdk_event_keyring.a 00:35:54.336 LIB libspdk_event_iobuf.a 00:35:54.336 SO libspdk_event_vmd.so.6.0 00:35:54.336 SO libspdk_event_fsdev.so.1.0 00:35:54.336 LIB libspdk_event_vhost_blk.a 00:35:54.336 SO libspdk_event_sock.so.5.0 00:35:54.336 SO libspdk_event_keyring.so.1.0 00:35:54.336 SO libspdk_event_iobuf.so.3.0 00:35:54.336 SYMLINK libspdk_event_scheduler.so 00:35:54.336 SO libspdk_event_vhost_blk.so.3.0 00:35:54.336 SYMLINK 
libspdk_event_vfu_tgt.so 00:35:54.336 SYMLINK libspdk_event_sock.so 00:35:54.336 SYMLINK libspdk_event_fsdev.so 00:35:54.336 SYMLINK libspdk_event_keyring.so 00:35:54.336 SYMLINK libspdk_event_vmd.so 00:35:54.595 SYMLINK libspdk_event_iobuf.so 00:35:54.595 SYMLINK libspdk_event_vhost_blk.so 00:35:54.855 CC module/event/subsystems/accel/accel.o 00:35:55.114 LIB libspdk_event_accel.a 00:35:55.114 SO libspdk_event_accel.so.6.0 00:35:55.114 SYMLINK libspdk_event_accel.so 00:35:55.686 CC module/event/subsystems/bdev/bdev.o 00:35:55.686 LIB libspdk_event_bdev.a 00:35:55.686 SO libspdk_event_bdev.so.6.0 00:35:55.686 SYMLINK libspdk_event_bdev.so 00:35:56.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:35:56.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:35:56.256 CC module/event/subsystems/ublk/ublk.o 00:35:56.256 CC module/event/subsystems/scsi/scsi.o 00:35:56.256 CC module/event/subsystems/nbd/nbd.o 00:35:56.256 LIB libspdk_event_ublk.a 00:35:56.256 LIB libspdk_event_nbd.a 00:35:56.256 LIB libspdk_event_scsi.a 00:35:56.256 SO libspdk_event_ublk.so.3.0 00:35:56.516 SO libspdk_event_nbd.so.6.0 00:35:56.516 SO libspdk_event_scsi.so.6.0 00:35:56.516 LIB libspdk_event_nvmf.a 00:35:56.516 SO libspdk_event_nvmf.so.6.0 00:35:56.516 SYMLINK libspdk_event_nbd.so 00:35:56.516 SYMLINK libspdk_event_ublk.so 00:35:56.516 SYMLINK libspdk_event_scsi.so 00:35:56.516 SYMLINK libspdk_event_nvmf.so 00:35:56.775 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:35:56.775 CC module/event/subsystems/iscsi/iscsi.o 00:35:57.035 LIB libspdk_event_vhost_scsi.a 00:35:57.035 SO libspdk_event_vhost_scsi.so.3.0 00:35:57.035 SYMLINK libspdk_event_vhost_scsi.so 00:35:57.035 LIB libspdk_event_iscsi.a 00:35:57.035 SO libspdk_event_iscsi.so.6.0 00:35:57.295 SYMLINK libspdk_event_iscsi.so 00:35:57.295 SO libspdk.so.6.0 00:35:57.295 SYMLINK libspdk.so 00:35:57.871 CC app/trace_record/trace_record.o 00:35:57.871 CXX app/trace/trace.o 00:35:57.871 CC app/spdk_nvme_discover/discovery_aer.o 
00:35:57.871 CC app/spdk_nvme_identify/identify.o 00:35:57.871 CC app/spdk_top/spdk_top.o 00:35:57.871 CC app/spdk_nvme_perf/perf.o 00:35:57.871 CC app/spdk_lspci/spdk_lspci.o 00:35:57.871 CC test/rpc_client/rpc_client_test.o 00:35:57.871 TEST_HEADER include/spdk/accel.h 00:35:57.871 TEST_HEADER include/spdk/barrier.h 00:35:57.871 TEST_HEADER include/spdk/accel_module.h 00:35:57.871 TEST_HEADER include/spdk/assert.h 00:35:57.871 TEST_HEADER include/spdk/base64.h 00:35:57.871 TEST_HEADER include/spdk/bdev.h 00:35:57.871 TEST_HEADER include/spdk/bdev_module.h 00:35:57.872 TEST_HEADER include/spdk/bdev_zone.h 00:35:57.872 TEST_HEADER include/spdk/bit_array.h 00:35:57.872 TEST_HEADER include/spdk/bit_pool.h 00:35:57.872 TEST_HEADER include/spdk/blob_bdev.h 00:35:57.872 TEST_HEADER include/spdk/blobfs_bdev.h 00:35:57.872 TEST_HEADER include/spdk/blobfs.h 00:35:57.872 TEST_HEADER include/spdk/blob.h 00:35:57.872 TEST_HEADER include/spdk/conf.h 00:35:57.872 TEST_HEADER include/spdk/config.h 00:35:57.872 TEST_HEADER include/spdk/cpuset.h 00:35:57.872 TEST_HEADER include/spdk/crc16.h 00:35:57.872 TEST_HEADER include/spdk/crc32.h 00:35:57.872 TEST_HEADER include/spdk/crc64.h 00:35:57.872 TEST_HEADER include/spdk/dif.h 00:35:57.872 TEST_HEADER include/spdk/dma.h 00:35:57.872 TEST_HEADER include/spdk/endian.h 00:35:57.872 TEST_HEADER include/spdk/env.h 00:35:57.872 TEST_HEADER include/spdk/event.h 00:35:57.872 TEST_HEADER include/spdk/env_dpdk.h 00:35:57.872 TEST_HEADER include/spdk/fd_group.h 00:35:57.872 TEST_HEADER include/spdk/fd.h 00:35:57.872 TEST_HEADER include/spdk/file.h 00:35:57.872 TEST_HEADER include/spdk/fsdev.h 00:35:57.872 TEST_HEADER include/spdk/ftl.h 00:35:57.872 TEST_HEADER include/spdk/fsdev_module.h 00:35:57.872 CC app/iscsi_tgt/iscsi_tgt.o 00:35:57.872 TEST_HEADER include/spdk/fuse_dispatcher.h 00:35:57.872 TEST_HEADER include/spdk/gpt_spec.h 00:35:57.872 TEST_HEADER include/spdk/hexlify.h 00:35:57.872 TEST_HEADER include/spdk/idxd.h 00:35:57.872 
TEST_HEADER include/spdk/histogram_data.h 00:35:57.872 CC app/spdk_dd/spdk_dd.o 00:35:57.872 TEST_HEADER include/spdk/idxd_spec.h 00:35:57.872 TEST_HEADER include/spdk/init.h 00:35:57.872 TEST_HEADER include/spdk/ioat_spec.h 00:35:57.872 TEST_HEADER include/spdk/ioat.h 00:35:57.872 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:57.872 TEST_HEADER include/spdk/iscsi_spec.h 00:35:57.872 TEST_HEADER include/spdk/json.h 00:35:57.872 TEST_HEADER include/spdk/jsonrpc.h 00:35:57.872 TEST_HEADER include/spdk/keyring.h 00:35:57.872 TEST_HEADER include/spdk/keyring_module.h 00:35:57.872 CC app/nvmf_tgt/nvmf_main.o 00:35:57.872 TEST_HEADER include/spdk/likely.h 00:35:57.872 TEST_HEADER include/spdk/lvol.h 00:35:57.872 TEST_HEADER include/spdk/log.h 00:35:57.872 TEST_HEADER include/spdk/md5.h 00:35:57.872 TEST_HEADER include/spdk/memory.h 00:35:57.872 TEST_HEADER include/spdk/mmio.h 00:35:57.872 TEST_HEADER include/spdk/nbd.h 00:35:57.872 TEST_HEADER include/spdk/net.h 00:35:57.872 TEST_HEADER include/spdk/nvme.h 00:35:57.872 TEST_HEADER include/spdk/notify.h 00:35:57.872 TEST_HEADER include/spdk/nvme_intel.h 00:35:57.872 TEST_HEADER include/spdk/nvme_ocssd.h 00:35:57.872 TEST_HEADER include/spdk/nvme_zns.h 00:35:57.872 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:35:57.872 TEST_HEADER include/spdk/nvme_spec.h 00:35:57.872 TEST_HEADER include/spdk/nvmf_cmd.h 00:35:57.872 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:35:57.872 TEST_HEADER include/spdk/nvmf.h 00:35:57.872 TEST_HEADER include/spdk/nvmf_spec.h 00:35:57.872 TEST_HEADER include/spdk/opal.h 00:35:57.872 TEST_HEADER include/spdk/nvmf_transport.h 00:35:57.872 TEST_HEADER include/spdk/opal_spec.h 00:35:57.872 TEST_HEADER include/spdk/pci_ids.h 00:35:57.872 TEST_HEADER include/spdk/pipe.h 00:35:57.872 TEST_HEADER include/spdk/queue.h 00:35:57.872 TEST_HEADER include/spdk/rpc.h 00:35:57.872 TEST_HEADER include/spdk/reduce.h 00:35:57.872 TEST_HEADER include/spdk/scheduler.h 00:35:57.872 TEST_HEADER include/spdk/scsi.h 
00:35:57.872 TEST_HEADER include/spdk/scsi_spec.h 00:35:57.872 TEST_HEADER include/spdk/sock.h 00:35:57.872 TEST_HEADER include/spdk/stdinc.h 00:35:57.872 CC app/spdk_tgt/spdk_tgt.o 00:35:57.872 TEST_HEADER include/spdk/string.h 00:35:57.872 TEST_HEADER include/spdk/thread.h 00:35:57.872 TEST_HEADER include/spdk/trace.h 00:35:57.872 TEST_HEADER include/spdk/trace_parser.h 00:35:57.872 TEST_HEADER include/spdk/tree.h 00:35:57.872 TEST_HEADER include/spdk/ublk.h 00:35:57.872 TEST_HEADER include/spdk/util.h 00:35:57.872 TEST_HEADER include/spdk/uuid.h 00:35:57.872 TEST_HEADER include/spdk/version.h 00:35:57.872 TEST_HEADER include/spdk/vfio_user_pci.h 00:35:57.872 TEST_HEADER include/spdk/vfio_user_spec.h 00:35:57.872 TEST_HEADER include/spdk/vhost.h 00:35:57.872 TEST_HEADER include/spdk/vmd.h 00:35:57.872 TEST_HEADER include/spdk/xor.h 00:35:57.872 TEST_HEADER include/spdk/zipf.h 00:35:57.872 CXX test/cpp_headers/accel.o 00:35:57.872 CXX test/cpp_headers/assert.o 00:35:57.872 CXX test/cpp_headers/accel_module.o 00:35:57.872 CXX test/cpp_headers/barrier.o 00:35:57.872 CXX test/cpp_headers/base64.o 00:35:57.872 CXX test/cpp_headers/bdev_zone.o 00:35:57.872 CXX test/cpp_headers/bdev_module.o 00:35:57.872 CXX test/cpp_headers/bdev.o 00:35:57.872 CXX test/cpp_headers/bit_array.o 00:35:57.872 CXX test/cpp_headers/bit_pool.o 00:35:57.872 CXX test/cpp_headers/blob_bdev.o 00:35:57.872 CXX test/cpp_headers/blobfs.o 00:35:57.872 CXX test/cpp_headers/blobfs_bdev.o 00:35:57.872 CXX test/cpp_headers/conf.o 00:35:57.872 CXX test/cpp_headers/blob.o 00:35:57.872 CXX test/cpp_headers/cpuset.o 00:35:57.872 CXX test/cpp_headers/crc16.o 00:35:57.872 CXX test/cpp_headers/config.o 00:35:57.872 CXX test/cpp_headers/crc32.o 00:35:57.872 CXX test/cpp_headers/crc64.o 00:35:57.872 CXX test/cpp_headers/dif.o 00:35:57.872 CXX test/cpp_headers/endian.o 00:35:57.872 CXX test/cpp_headers/dma.o 00:35:57.872 CXX test/cpp_headers/env.o 00:35:57.872 CXX test/cpp_headers/env_dpdk.o 00:35:57.872 CXX 
test/cpp_headers/event.o 00:35:57.872 CXX test/cpp_headers/fd_group.o 00:35:57.872 CXX test/cpp_headers/fd.o 00:35:57.872 CXX test/cpp_headers/file.o 00:35:57.872 CXX test/cpp_headers/fsdev.o 00:35:57.872 CXX test/cpp_headers/fsdev_module.o 00:35:57.872 CXX test/cpp_headers/ftl.o 00:35:57.872 CXX test/cpp_headers/fuse_dispatcher.o 00:35:57.872 CXX test/cpp_headers/gpt_spec.o 00:35:57.872 CXX test/cpp_headers/hexlify.o 00:35:57.872 CXX test/cpp_headers/histogram_data.o 00:35:57.872 CXX test/cpp_headers/idxd.o 00:35:57.872 CXX test/cpp_headers/init.o 00:35:57.872 CXX test/cpp_headers/idxd_spec.o 00:35:57.872 CXX test/cpp_headers/ioat.o 00:35:57.872 CXX test/cpp_headers/ioat_spec.o 00:35:57.872 CXX test/cpp_headers/iscsi_spec.o 00:35:57.872 CC app/fio/nvme/fio_plugin.o 00:35:57.872 CC test/app/histogram_perf/histogram_perf.o 00:35:57.872 CC examples/util/zipf/zipf.o 00:35:57.872 CXX test/cpp_headers/json.o 00:35:57.872 CC app/fio/bdev/fio_plugin.o 00:35:57.872 CC test/app/jsoncat/jsoncat.o 00:35:57.872 CC test/app/stub/stub.o 00:35:57.872 CC test/env/vtophys/vtophys.o 00:35:57.872 CC test/env/memory/memory_ut.o 00:35:58.142 CC examples/ioat/perf/perf.o 00:35:58.142 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:35:58.142 LINK spdk_lspci 00:35:58.142 CC examples/ioat/verify/verify.o 00:35:58.142 CC test/env/pci/pci_ut.o 00:35:58.142 CC test/thread/poller_perf/poller_perf.o 00:35:58.142 CC test/app/bdev_svc/bdev_svc.o 00:35:58.142 CC test/dma/test_dma/test_dma.o 00:35:58.142 LINK spdk_nvme_discover 00:35:58.409 LINK rpc_client_test 00:35:58.409 CC test/env/mem_callbacks/mem_callbacks.o 00:35:58.409 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:35:58.409 LINK interrupt_tgt 00:35:58.409 LINK spdk_trace_record 00:35:58.409 LINK nvmf_tgt 00:35:58.409 CXX test/cpp_headers/jsonrpc.o 00:35:58.409 LINK vtophys 00:35:58.409 CXX test/cpp_headers/keyring.o 00:35:58.409 LINK zipf 00:35:58.409 LINK histogram_perf 00:35:58.409 LINK env_dpdk_post_init 00:35:58.409 CXX 
test/cpp_headers/keyring_module.o 00:35:58.409 LINK iscsi_tgt 00:35:58.409 CXX test/cpp_headers/likely.o 00:35:58.409 CXX test/cpp_headers/log.o 00:35:58.409 LINK stub 00:35:58.409 CXX test/cpp_headers/lvol.o 00:35:58.409 CXX test/cpp_headers/md5.o 00:35:58.409 CXX test/cpp_headers/memory.o 00:35:58.409 CXX test/cpp_headers/mmio.o 00:35:58.409 LINK poller_perf 00:35:58.409 LINK jsoncat 00:35:58.677 CXX test/cpp_headers/nbd.o 00:35:58.677 CXX test/cpp_headers/net.o 00:35:58.677 CXX test/cpp_headers/notify.o 00:35:58.677 CXX test/cpp_headers/nvme.o 00:35:58.677 CXX test/cpp_headers/nvme_intel.o 00:35:58.677 CXX test/cpp_headers/nvme_ocssd.o 00:35:58.677 CXX test/cpp_headers/nvme_ocssd_spec.o 00:35:58.677 CXX test/cpp_headers/nvme_spec.o 00:35:58.677 CXX test/cpp_headers/nvme_zns.o 00:35:58.677 CXX test/cpp_headers/nvmf_cmd.o 00:35:58.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:35:58.677 CXX test/cpp_headers/nvmf.o 00:35:58.677 CXX test/cpp_headers/nvmf_spec.o 00:35:58.677 CXX test/cpp_headers/opal.o 00:35:58.677 CXX test/cpp_headers/nvmf_transport.o 00:35:58.677 CXX test/cpp_headers/opal_spec.o 00:35:58.677 LINK spdk_tgt 00:35:58.677 CXX test/cpp_headers/pci_ids.o 00:35:58.677 CXX test/cpp_headers/pipe.o 00:35:58.677 LINK ioat_perf 00:35:58.677 CXX test/cpp_headers/queue.o 00:35:58.677 CXX test/cpp_headers/reduce.o 00:35:58.677 CXX test/cpp_headers/rpc.o 00:35:58.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:35:58.677 CXX test/cpp_headers/scheduler.o 00:35:58.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:35:58.677 CXX test/cpp_headers/scsi.o 00:35:58.677 CXX test/cpp_headers/scsi_spec.o 00:35:58.677 CXX test/cpp_headers/sock.o 00:35:58.677 CXX test/cpp_headers/stdinc.o 00:35:58.677 CXX test/cpp_headers/string.o 00:35:58.677 CXX test/cpp_headers/thread.o 00:35:58.677 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:35:58.677 CXX test/cpp_headers/trace.o 00:35:58.677 CXX test/cpp_headers/trace_parser.o 00:35:58.677 CXX test/cpp_headers/tree.o 00:35:58.677 CXX 
test/cpp_headers/ublk.o 00:35:58.677 LINK bdev_svc 00:35:58.677 CXX test/cpp_headers/util.o 00:35:58.677 CXX test/cpp_headers/vfio_user_pci.o 00:35:58.677 CXX test/cpp_headers/version.o 00:35:58.677 CXX test/cpp_headers/uuid.o 00:35:58.677 CXX test/cpp_headers/vfio_user_spec.o 00:35:58.677 LINK verify 00:35:58.677 CXX test/cpp_headers/vhost.o 00:35:58.677 CXX test/cpp_headers/vmd.o 00:35:58.677 CXX test/cpp_headers/xor.o 00:35:58.677 CXX test/cpp_headers/zipf.o 00:35:58.677 LINK spdk_trace 00:35:58.938 LINK spdk_dd 00:35:58.938 LINK pci_ut 00:35:59.199 LINK nvme_fuzz 00:35:59.199 LINK spdk_bdev 00:35:59.199 LINK spdk_nvme 00:35:59.199 CC test/event/reactor_perf/reactor_perf.o 00:35:59.199 CC examples/sock/hello_world/hello_sock.o 00:35:59.199 CC test/event/event_perf/event_perf.o 00:35:59.199 CC test/event/reactor/reactor.o 00:35:59.199 CC examples/idxd/perf/perf.o 00:35:59.199 CC examples/vmd/led/led.o 00:35:59.199 CC test/event/app_repeat/app_repeat.o 00:35:59.199 CC examples/vmd/lsvmd/lsvmd.o 00:35:59.199 CC examples/thread/thread/thread_ex.o 00:35:59.199 CC test/event/scheduler/scheduler.o 00:35:59.199 LINK test_dma 00:35:59.199 CC app/vhost/vhost.o 00:35:59.459 LINK spdk_top 00:35:59.459 LINK spdk_nvme_perf 00:35:59.459 LINK mem_callbacks 00:35:59.459 LINK vhost_fuzz 00:35:59.459 LINK event_perf 00:35:59.459 LINK reactor_perf 00:35:59.459 LINK spdk_nvme_identify 00:35:59.459 LINK lsvmd 00:35:59.459 LINK reactor 00:35:59.459 LINK app_repeat 00:35:59.459 LINK led 00:35:59.459 LINK hello_sock 00:35:59.459 LINK thread 00:35:59.459 LINK vhost 00:35:59.720 LINK scheduler 00:35:59.720 LINK idxd_perf 00:35:59.720 CC test/nvme/cuse/cuse.o 00:35:59.720 CC test/nvme/aer/aer.o 00:35:59.720 CC test/nvme/compliance/nvme_compliance.o 00:35:59.720 CC test/nvme/err_injection/err_injection.o 00:35:59.720 CC test/nvme/simple_copy/simple_copy.o 00:35:59.720 CC test/nvme/overhead/overhead.o 00:35:59.720 CC test/nvme/fdp/fdp.o 00:35:59.720 CC test/nvme/reset/reset.o 00:35:59.720 CC 
test/nvme/startup/startup.o 00:35:59.720 CC test/nvme/boot_partition/boot_partition.o 00:35:59.720 CC test/nvme/doorbell_aers/doorbell_aers.o 00:35:59.720 CC test/nvme/connect_stress/connect_stress.o 00:35:59.720 CC test/nvme/e2edp/nvme_dp.o 00:35:59.720 CC test/nvme/reserve/reserve.o 00:35:59.980 CC test/nvme/fused_ordering/fused_ordering.o 00:35:59.980 CC test/nvme/sgl/sgl.o 00:35:59.980 CC test/blobfs/mkfs/mkfs.o 00:35:59.980 CC test/accel/dif/dif.o 00:35:59.980 CC test/lvol/esnap/esnap.o 00:35:59.980 CC examples/nvme/nvme_manage/nvme_manage.o 00:35:59.980 CC examples/nvme/hello_world/hello_world.o 00:35:59.980 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:35:59.980 CC examples/nvme/arbitration/arbitration.o 00:35:59.980 CC examples/nvme/abort/abort.o 00:35:59.980 CC examples/nvme/cmb_copy/cmb_copy.o 00:35:59.980 CC examples/nvme/reconnect/reconnect.o 00:35:59.980 CC examples/nvme/hotplug/hotplug.o 00:35:59.980 CC examples/accel/perf/accel_perf.o 00:35:59.980 LINK memory_ut 00:36:00.240 LINK boot_partition 00:36:00.240 LINK connect_stress 00:36:00.240 LINK startup 00:36:00.240 LINK doorbell_aers 00:36:00.240 CC examples/blob/hello_world/hello_blob.o 00:36:00.240 LINK sgl 00:36:00.240 LINK mkfs 00:36:00.240 CC examples/fsdev/hello_world/hello_fsdev.o 00:36:00.240 LINK err_injection 00:36:00.240 CC examples/blob/cli/blobcli.o 00:36:00.240 LINK reserve 00:36:00.240 LINK fused_ordering 00:36:00.240 LINK pmr_persistence 00:36:00.240 LINK reset 00:36:00.240 LINK nvme_dp 00:36:00.240 LINK simple_copy 00:36:00.240 LINK aer 00:36:00.240 LINK fdp 00:36:00.240 LINK overhead 00:36:00.240 LINK cmb_copy 00:36:00.240 LINK hello_world 00:36:00.240 LINK hotplug 00:36:00.240 LINK arbitration 00:36:00.240 LINK nvme_compliance 00:36:00.240 LINK abort 00:36:00.240 LINK reconnect 00:36:00.500 LINK hello_blob 00:36:00.500 LINK hello_fsdev 00:36:00.500 LINK dif 00:36:00.500 LINK nvme_manage 00:36:00.760 LINK accel_perf 00:36:00.760 LINK blobcli 00:36:00.760 LINK iscsi_fuzz 
00:36:01.331 CC examples/bdev/hello_world/hello_bdev.o 00:36:01.331 CC test/bdev/bdevio/bdevio.o 00:36:01.331 CC examples/bdev/bdevperf/bdevperf.o 00:36:01.331 LINK cuse 00:36:01.592 LINK hello_bdev 00:36:01.592 LINK bdevio 00:36:02.164 LINK bdevperf 00:36:02.737 CC examples/nvmf/nvmf/nvmf.o 00:36:03.308 LINK nvmf 00:36:05.223 LINK esnap 00:36:05.794 00:36:05.794 real 1m10.821s 00:36:05.794 user 10m28.135s 00:36:05.794 sys 3m50.409s 00:36:05.794 10:47:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:36:05.794 10:47:06 make -- common/autotest_common.sh@10 -- $ set +x 00:36:05.794 ************************************ 00:36:05.794 END TEST make 00:36:05.794 ************************************ 00:36:05.794 10:47:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:36:05.794 10:47:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:05.794 10:47:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:05.794 10:47:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.794 10:47:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:05.794 10:47:06 -- pm/common@44 -- $ pid=2154397 00:36:05.794 10:47:06 -- pm/common@50 -- $ kill -TERM 2154397 00:36:05.794 10:47:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.794 10:47:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:05.794 10:47:06 -- pm/common@44 -- $ pid=2154399 00:36:05.794 10:47:06 -- pm/common@50 -- $ kill -TERM 2154399 00:36:05.794 10:47:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.794 10:47:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:05.794 10:47:06 -- pm/common@44 -- $ pid=2154401 00:36:05.794 10:47:06 -- pm/common@50 -- $ kill -TERM 2154401 00:36:05.794 10:47:06 -- pm/common@42 -- 
$ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.794 10:47:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:05.794 10:47:06 -- pm/common@44 -- $ pid=2154426 00:36:05.794 10:47:06 -- pm/common@50 -- $ sudo -E kill -TERM 2154426 00:36:05.794 10:47:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:36:05.794 10:47:06 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:36:05.794 10:47:06 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:05.794 10:47:06 -- common/autotest_common.sh@1711 -- # lcov --version 00:36:05.794 10:47:06 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:06.055 10:47:07 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:06.055 10:47:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.055 10:47:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.055 10:47:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.055 10:47:07 -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.055 10:47:07 -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.055 10:47:07 -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.055 10:47:07 -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.055 10:47:07 -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.055 10:47:07 -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.055 10:47:07 -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.055 10:47:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.055 10:47:07 -- scripts/common.sh@344 -- # case "$op" in 00:36:06.055 10:47:07 -- scripts/common.sh@345 -- # : 1 00:36:06.055 10:47:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.055 10:47:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:06.055 10:47:07 -- scripts/common.sh@365 -- # decimal 1 00:36:06.055 10:47:07 -- scripts/common.sh@353 -- # local d=1 00:36:06.055 10:47:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.055 10:47:07 -- scripts/common.sh@355 -- # echo 1 00:36:06.055 10:47:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.055 10:47:07 -- scripts/common.sh@366 -- # decimal 2 00:36:06.055 10:47:07 -- scripts/common.sh@353 -- # local d=2 00:36:06.055 10:47:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.055 10:47:07 -- scripts/common.sh@355 -- # echo 2 00:36:06.055 10:47:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.055 10:47:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.055 10:47:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.055 10:47:07 -- scripts/common.sh@368 -- # return 0 00:36:06.055 10:47:07 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.055 10:47:07 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.055 --rc genhtml_branch_coverage=1 00:36:06.055 --rc genhtml_function_coverage=1 00:36:06.055 --rc genhtml_legend=1 00:36:06.055 --rc geninfo_all_blocks=1 00:36:06.055 --rc geninfo_unexecuted_blocks=1 00:36:06.055 00:36:06.055 ' 00:36:06.055 10:47:07 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.055 --rc genhtml_branch_coverage=1 00:36:06.055 --rc genhtml_function_coverage=1 00:36:06.055 --rc genhtml_legend=1 00:36:06.055 --rc geninfo_all_blocks=1 00:36:06.055 --rc geninfo_unexecuted_blocks=1 00:36:06.055 00:36:06.055 ' 00:36:06.055 10:47:07 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.055 --rc genhtml_branch_coverage=1 00:36:06.055 --rc 
genhtml_function_coverage=1 00:36:06.055 --rc genhtml_legend=1 00:36:06.055 --rc geninfo_all_blocks=1 00:36:06.055 --rc geninfo_unexecuted_blocks=1 00:36:06.055 00:36:06.055 ' 00:36:06.055 10:47:07 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.055 --rc genhtml_branch_coverage=1 00:36:06.055 --rc genhtml_function_coverage=1 00:36:06.055 --rc genhtml_legend=1 00:36:06.055 --rc geninfo_all_blocks=1 00:36:06.055 --rc geninfo_unexecuted_blocks=1 00:36:06.055 00:36:06.055 ' 00:36:06.055 10:47:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.055 10:47:07 -- nvmf/common.sh@7 -- # uname -s 00:36:06.055 10:47:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.055 10:47:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.055 10:47:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.055 10:47:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.055 10:47:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.055 10:47:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.055 10:47:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.055 10:47:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.055 10:47:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.055 10:47:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.055 10:47:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:36:06.055 10:47:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:36:06.055 10:47:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.055 10:47:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.055 10:47:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.055 10:47:07 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.055 10:47:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.055 10:47:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.055 10:47:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.055 10:47:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.055 10:47:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.056 10:47:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.056 10:47:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.056 10:47:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.056 10:47:07 -- paths/export.sh@5 -- # export PATH 00:36:06.056 10:47:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.056 10:47:07 -- nvmf/common.sh@51 -- # : 0 00:36:06.056 10:47:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.056 10:47:07 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:36:06.056 10:47:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.056 10:47:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.056 10:47:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.056 10:47:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:06.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:06.056 10:47:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.056 10:47:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.056 10:47:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.056 10:47:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:36:06.056 10:47:07 -- spdk/autotest.sh@32 -- # uname -s 00:36:06.056 10:47:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:36:06.056 10:47:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:36:06.056 10:47:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:36:06.056 10:47:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:36:06.056 10:47:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:36:06.056 10:47:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:36:06.056 10:47:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:36:06.056 10:47:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:36:06.056 10:47:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2216582 00:36:06.056 10:47:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:36:06.056 10:47:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:36:06.056 10:47:07 -- pm/common@17 -- # local monitor 00:36:06.056 10:47:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.056 10:47:07 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:36:06.056 10:47:07 -- pm/common@21 -- # date +%s 00:36:06.056 10:47:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.056 10:47:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.056 10:47:07 -- pm/common@21 -- # date +%s 00:36:06.056 10:47:07 -- pm/common@21 -- # date +%s 00:36:06.056 10:47:07 -- pm/common@25 -- # sleep 1 00:36:06.056 10:47:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733737627 00:36:06.056 10:47:07 -- pm/common@21 -- # date +%s 00:36:06.056 10:47:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733737627 00:36:06.056 10:47:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733737627 00:36:06.056 10:47:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733737627 00:36:06.056 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733737627_collect-cpu-temp.pm.log 00:36:06.056 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733737627_collect-cpu-load.pm.log 00:36:06.056 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733737627_collect-vmstat.pm.log 00:36:06.056 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733737627_collect-bmc-pm.bmc.pm.log 00:36:06.997 
10:47:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:36:06.997 10:47:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:36:06.997 10:47:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.997 10:47:08 -- common/autotest_common.sh@10 -- # set +x 00:36:06.997 10:47:08 -- spdk/autotest.sh@59 -- # create_test_list 00:36:06.997 10:47:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:36:06.997 10:47:08 -- common/autotest_common.sh@10 -- # set +x 00:36:07.258 10:47:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:36:07.258 10:47:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:07.258 10:47:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:07.258 10:47:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:07.258 10:47:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:07.258 10:47:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:36:07.258 10:47:08 -- common/autotest_common.sh@1457 -- # uname 00:36:07.258 10:47:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:36:07.258 10:47:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:36:07.258 10:47:08 -- common/autotest_common.sh@1477 -- # uname 00:36:07.258 10:47:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:36:07.258 10:47:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:36:07.258 10:47:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:36:07.258 lcov: LCOV version 1.15 00:36:07.258 10:47:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:36:29.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:36:29.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:36:47.341 10:47:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:36:47.341 10:47:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.341 10:47:45 -- common/autotest_common.sh@10 -- # set +x 00:36:47.341 10:47:45 -- spdk/autotest.sh@78 -- # rm -f 00:36:47.341 10:47:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:48.282 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:48.282 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:48.282 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:48.282 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:48.282 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:48.282 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:48.282 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:48.542 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:48.542 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:48.804 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:48.804 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:48.804 10:47:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:36:48.804 10:47:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:36:48.804 10:47:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:36:48.804 10:47:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:36:48.804 10:47:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:36:48.804 10:47:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:36:48.804 10:47:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:36:48.804 10:47:49 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:36:48.804 10:47:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:36:48.804 10:47:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:36:48.804 10:47:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:48.804 10:47:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:48.804 10:47:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:48.804 10:47:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:36:48.804 10:47:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:36:48.804 10:47:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:36:48.804 10:47:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:36:48.804 10:47:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:36:48.804 10:47:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:36:48.804 No valid GPT data, bailing 00:36:48.804 10:47:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:48.804 10:47:49 -- scripts/common.sh@394 -- # pt= 00:36:48.804 10:47:49 -- scripts/common.sh@395 -- 
# return 1 00:36:48.804 10:47:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:36:48.804 1+0 records in 00:36:48.804 1+0 records out 00:36:48.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051023 s, 206 MB/s 00:36:48.804 10:47:49 -- spdk/autotest.sh@105 -- # sync 00:36:48.804 10:47:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:36:48.804 10:47:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:36:48.804 10:47:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:36:55.394 10:47:55 -- spdk/autotest.sh@111 -- # uname -s 00:36:55.394 10:47:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:36:55.394 10:47:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:36:55.394 10:47:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:36:57.939 Hugepages 00:36:57.939 node hugesize free / total 00:36:57.939 node0 1048576kB 0 / 0 00:36:57.939 node0 2048kB 0 / 0 00:36:57.939 node1 1048576kB 0 / 0 00:36:57.939 node1 2048kB 0 / 0 00:36:57.939 00:36:57.939 Type BDF Vendor Device NUMA Driver Device Block devices 00:36:57.939 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:36:57.939 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:36:57.939 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:36:57.939 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:36:57.939 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:36:57.939 10:47:58 -- spdk/autotest.sh@117 -- # uname -s 00:36:57.939 10:47:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:36:57.939 10:47:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:36:57.939 10:47:58 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:01.315 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:01.315 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:04.616 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:04.616 10:48:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:37:05.187 10:48:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:37:05.187 10:48:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:37:05.187 10:48:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:37:05.187 10:48:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:37:05.187 10:48:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:05.187 10:48:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:37:05.187 10:48:06 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:05.187 10:48:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:05.187 10:48:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:05.447 10:48:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:05.447 10:48:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:37:05.447 10:48:06 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:08.742 Waiting for block devices as requested 00:37:08.742 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:08.742 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:08.742 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:08.742 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:08.742 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:09.001 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:09.001 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:09.001 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:09.261 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:09.261 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:09.261 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:09.520 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:09.520 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:09.520 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:09.780 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:09.780 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:09.780 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:10.040 10:48:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:37:10.040 10:48:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:37:10.040 10:48:11 -- 
common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:37:10.040 10:48:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:37:10.040 10:48:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:37:10.040 10:48:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:37:10.040 10:48:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:37:10.040 10:48:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:37:10.040 10:48:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:37:10.040 10:48:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:37:10.040 10:48:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:37:10.040 10:48:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:37:10.040 10:48:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:37:10.040 10:48:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:37:10.040 10:48:11 -- common/autotest_common.sh@1543 -- # continue 00:37:10.041 10:48:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:37:10.041 10:48:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.041 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:37:10.041 10:48:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:37:10.041 10:48:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.041 
10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:37:10.041 10:48:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:13.359 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:13.359 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:13.359 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:13.359 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:13.619 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:16.916 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:16.916 10:48:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:37:16.916 10:48:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.916 10:48:17 -- common/autotest_common.sh@10 -- # set +x 00:37:16.916 10:48:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:37:16.916 10:48:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:37:16.916 10:48:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:37:16.916 10:48:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:37:16.916 10:48:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:37:16.916 10:48:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:37:16.916 10:48:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:37:16.916 10:48:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
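The pre_cleanup xtrace earlier in the log extracts the OACS field from `nvme id-ctrl` output with a `grep | cut -d: -f2` pipeline and then derives `oacs_ns_manage=8`. A standalone sketch of that pipeline against a canned sample line (hypothetical — no NVMe device is assumed here, and the bit-3 test is an assumption that happens to be consistent with the log's `oacs=' 0xe'` → `oacs_ns_manage=8`):

```shell
# Canned stand-in for one line of `nvme id-ctrl /dev/nvme0` output
# (hypothetical sample; the log shows only the pipeline, not the raw line).
id_ctrl_line='oacs      : 0xe'
# Same grep | cut -d: -f2 pipeline as autotest_common.sh@1531:
oacs=$(printf '%s\n' "$id_ctrl_line" | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS advertises Namespace Management support; masking it
# out of 0xe yields 8, matching the log's oacs_ns_manage=8.
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"
```

Bash arithmetic accepts the leading whitespace and `0x` hex prefix left behind by `cut`, which is why the script can use the field value directly without trimming.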
00:37:16.916 10:48:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:16.916 10:48:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:37:16.916 10:48:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:16.916 10:48:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:16.916 10:48:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:16.916 10:48:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:16.916 10:48:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:37:16.916 10:48:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:37:16.916 10:48:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:37:16.916 10:48:18 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:37:16.916 10:48:18 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:37:16.916 10:48:18 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:37:16.916 10:48:18 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:37:16.916 10:48:18 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:37:16.916 10:48:18 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:37:16.916 10:48:18 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2231506 00:37:16.916 10:48:18 -- common/autotest_common.sh@1585 -- # waitforlisten 2231506 00:37:16.916 10:48:18 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:16.916 10:48:18 -- common/autotest_common.sh@835 -- # '[' -z 2231506 ']' 00:37:16.916 10:48:18 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.916 10:48:18 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.916 10:48:18 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.916 10:48:18 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.917 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:37:17.176 [2024-12-09 10:48:18.153169] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:37:17.176 [2024-12-09 10:48:18.153251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231506 ] 00:37:17.176 [2024-12-09 10:48:18.279615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.176 [2024-12-09 10:48:18.332232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.117 10:48:19 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.117 10:48:19 -- common/autotest_common.sh@868 -- # return 0 00:37:18.117 10:48:19 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:37:18.117 10:48:19 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:37:18.117 10:48:19 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:37:21.411 nvme0n1 00:37:21.411 10:48:22 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:37:21.411 [2024-12-09 10:48:22.434451] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:37:21.411 request: 00:37:21.411 { 00:37:21.411 "nvme_ctrlr_name": "nvme0", 00:37:21.411 "password": "test", 00:37:21.411 "method": "bdev_nvme_opal_revert", 00:37:21.411 "req_id": 1 00:37:21.411 } 00:37:21.411 Got JSON-RPC error response 00:37:21.411 response: 00:37:21.411 { 00:37:21.412 
"code": -32602, 00:37:21.412 "message": "Invalid parameters" 00:37:21.412 } 00:37:21.412 10:48:22 -- common/autotest_common.sh@1591 -- # true 00:37:21.412 10:48:22 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:37:21.412 10:48:22 -- common/autotest_common.sh@1595 -- # killprocess 2231506 00:37:21.412 10:48:22 -- common/autotest_common.sh@954 -- # '[' -z 2231506 ']' 00:37:21.412 10:48:22 -- common/autotest_common.sh@958 -- # kill -0 2231506 00:37:21.412 10:48:22 -- common/autotest_common.sh@959 -- # uname 00:37:21.412 10:48:22 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.412 10:48:22 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2231506 00:37:21.412 10:48:22 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:21.412 10:48:22 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.412 10:48:22 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2231506' 00:37:21.412 killing process with pid 2231506 00:37:21.412 10:48:22 -- common/autotest_common.sh@973 -- # kill 2231506 00:37:21.412 10:48:22 -- common/autotest_common.sh@978 -- # wait 2231506 00:37:25.611 10:48:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:37:25.611 10:48:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:37:25.611 10:48:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:37:25.611 10:48:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:37:25.611 10:48:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:37:25.611 10:48:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:25.611 10:48:26 -- common/autotest_common.sh@10 -- # set +x 00:37:25.611 10:48:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:37:25.611 10:48:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:37:25.611 10:48:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:25.611 10:48:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.611 10:48:26 
-- common/autotest_common.sh@10 -- # set +x 00:37:25.611 ************************************ 00:37:25.611 START TEST env 00:37:25.611 ************************************ 00:37:25.611 10:48:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:37:25.611 * Looking for test storage... 00:37:25.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:37:25.612 10:48:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:25.612 10:48:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:37:25.612 10:48:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:25.612 10:48:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:25.612 10:48:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:25.612 10:48:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:25.612 10:48:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:25.612 10:48:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:37:25.612 10:48:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:37:25.612 10:48:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:37:25.612 10:48:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:37:25.612 10:48:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:37:25.612 10:48:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:37:25.612 10:48:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:37:25.612 10:48:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:25.612 10:48:26 env -- scripts/common.sh@344 -- # case "$op" in 00:37:25.612 10:48:26 env -- scripts/common.sh@345 -- # : 1 00:37:25.612 10:48:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:25.612 10:48:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:25.873 10:48:26 env -- scripts/common.sh@365 -- # decimal 1 00:37:25.873 10:48:26 env -- scripts/common.sh@353 -- # local d=1 00:37:25.873 10:48:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:25.873 10:48:26 env -- scripts/common.sh@355 -- # echo 1 00:37:25.873 10:48:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:37:25.873 10:48:26 env -- scripts/common.sh@366 -- # decimal 2 00:37:25.873 10:48:26 env -- scripts/common.sh@353 -- # local d=2 00:37:25.873 10:48:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:25.873 10:48:26 env -- scripts/common.sh@355 -- # echo 2 00:37:25.873 10:48:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:37:25.873 10:48:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:25.873 10:48:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:25.873 10:48:26 env -- scripts/common.sh@368 -- # return 0 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.873 --rc genhtml_branch_coverage=1 00:37:25.873 --rc genhtml_function_coverage=1 00:37:25.873 --rc genhtml_legend=1 00:37:25.873 --rc geninfo_all_blocks=1 00:37:25.873 --rc geninfo_unexecuted_blocks=1 00:37:25.873 00:37:25.873 ' 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.873 --rc genhtml_branch_coverage=1 00:37:25.873 --rc genhtml_function_coverage=1 00:37:25.873 --rc genhtml_legend=1 00:37:25.873 --rc geninfo_all_blocks=1 00:37:25.873 --rc geninfo_unexecuted_blocks=1 00:37:25.873 00:37:25.873 ' 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
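The `lt 1.15 2` / `cmp_versions` walk traced above splits both versions on `.-:` and compares components numerically, padding the shorter version with zeros. A minimal re-implementation of that idea (a sketch of the algorithm, not the actual `scripts/common.sh` code):

```shell
# Sketch of dot-separated version comparison, after the cmp_versions
# walk shown in the log (not the real scripts/common.sh implementation).
version_lt() {
    local IFS=.
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0, like the (( v < ... )) loop above.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness above takes the `lt 1.15 2` branch for `lcov` 1.15: the first components 1 and 2 already decide the comparison.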
00:37:25.873 --rc genhtml_branch_coverage=1 00:37:25.873 --rc genhtml_function_coverage=1 00:37:25.873 --rc genhtml_legend=1 00:37:25.873 --rc geninfo_all_blocks=1 00:37:25.873 --rc geninfo_unexecuted_blocks=1 00:37:25.873 00:37:25.873 ' 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.873 --rc genhtml_branch_coverage=1 00:37:25.873 --rc genhtml_function_coverage=1 00:37:25.873 --rc genhtml_legend=1 00:37:25.873 --rc geninfo_all_blocks=1 00:37:25.873 --rc geninfo_unexecuted_blocks=1 00:37:25.873 00:37:25.873 ' 00:37:25.873 10:48:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:25.873 10:48:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.873 10:48:26 env -- common/autotest_common.sh@10 -- # set +x 00:37:25.873 ************************************ 00:37:25.873 START TEST env_memory 00:37:25.873 ************************************ 00:37:25.873 10:48:26 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:37:25.873 00:37:25.873 00:37:25.873 CUnit - A unit testing framework for C - Version 2.1-3 00:37:25.873 http://cunit.sourceforge.net/ 00:37:25.873 00:37:25.873 00:37:25.873 Suite: mem_map_2mb 00:37:25.873 Test: alloc and free memory map ...[2024-12-09 10:48:26.854193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:37:25.873 passed 00:37:25.873 Test: mem map translation ...[2024-12-09 10:48:26.874265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:37:25.873 [2024-12-09 
10:48:26.874285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:37:25.873 [2024-12-09 10:48:26.874332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:37:25.873 [2024-12-09 10:48:26.874340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:37:25.873 passed 00:37:25.874 Test: mem map registration ...[2024-12-09 10:48:26.914523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:37:25.874 [2024-12-09 10:48:26.914542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:37:25.874 passed 00:37:25.874 Test: mem map adjacent registrations ...passed 00:37:25.874 Suite: mem_map_4kb 00:37:25.874 Test: alloc and free memory map ...[2024-12-09 10:48:27.020148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:37:25.874 passed 00:37:25.874 Test: mem map translation ...[2024-12-09 10:48:27.043641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:37:25.874 [2024-12-09 10:48:27.043665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:37:26.135 [2024-12-09 10:48:27.063145] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:37:26.135 [2024-12-09 10:48:27.063158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:37:26.135 passed 00:37:26.135 Test: mem map registration ...[2024-12-09 10:48:27.135303] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:37:26.135 [2024-12-09 10:48:27.135328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:37:26.135 passed 00:37:26.135 Test: mem map adjacent registrations ...passed 00:37:26.135 00:37:26.135 Run Summary: Type Total Ran Passed Failed Inactive 00:37:26.135 suites 2 2 n/a 0 0 00:37:26.135 tests 8 8 8 0 0 00:37:26.135 asserts 304 304 304 0 n/a 00:37:26.135 00:37:26.135 Elapsed time = 0.389 seconds 00:37:26.135 00:37:26.135 real 0m0.397s 00:37:26.135 user 0m0.384s 00:37:26.135 sys 0m0.012s 00:37:26.135 10:48:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.135 10:48:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:37:26.135 ************************************ 00:37:26.135 END TEST env_memory 00:37:26.135 ************************************ 00:37:26.135 10:48:27 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:37:26.135 10:48:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:26.135 10:48:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.135 10:48:27 env -- common/autotest_common.sh@10 -- # set +x 00:37:26.135 ************************************ 00:37:26.135 START TEST env_vtophys 00:37:26.135 
************************************ 00:37:26.135 10:48:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:37:26.397 EAL: lib.eal log level changed from notice to debug 00:37:26.397 EAL: Detected lcore 0 as core 0 on socket 0 00:37:26.397 EAL: Detected lcore 1 as core 1 on socket 0 00:37:26.397 EAL: Detected lcore 2 as core 2 on socket 0 00:37:26.397 EAL: Detected lcore 3 as core 3 on socket 0 00:37:26.397 EAL: Detected lcore 4 as core 4 on socket 0 00:37:26.397 EAL: Detected lcore 5 as core 8 on socket 0 00:37:26.397 EAL: Detected lcore 6 as core 9 on socket 0 00:37:26.397 EAL: Detected lcore 7 as core 10 on socket 0 00:37:26.397 EAL: Detected lcore 8 as core 11 on socket 0 00:37:26.397 EAL: Detected lcore 9 as core 16 on socket 0 00:37:26.397 EAL: Detected lcore 10 as core 17 on socket 0 00:37:26.397 EAL: Detected lcore 11 as core 18 on socket 0 00:37:26.397 EAL: Detected lcore 12 as core 19 on socket 0 00:37:26.397 EAL: Detected lcore 13 as core 20 on socket 0 00:37:26.397 EAL: Detected lcore 14 as core 24 on socket 0 00:37:26.397 EAL: Detected lcore 15 as core 25 on socket 0 00:37:26.397 EAL: Detected lcore 16 as core 26 on socket 0 00:37:26.397 EAL: Detected lcore 17 as core 27 on socket 0 00:37:26.397 EAL: Detected lcore 18 as core 0 on socket 1 00:37:26.397 EAL: Detected lcore 19 as core 1 on socket 1 00:37:26.397 EAL: Detected lcore 20 as core 2 on socket 1 00:37:26.397 EAL: Detected lcore 21 as core 3 on socket 1 00:37:26.397 EAL: Detected lcore 22 as core 4 on socket 1 00:37:26.397 EAL: Detected lcore 23 as core 8 on socket 1 00:37:26.397 EAL: Detected lcore 24 as core 9 on socket 1 00:37:26.397 EAL: Detected lcore 25 as core 10 on socket 1 00:37:26.397 EAL: Detected lcore 26 as core 11 on socket 1 00:37:26.397 EAL: Detected lcore 27 as core 16 on socket 1 00:37:26.397 EAL: Detected lcore 28 as core 17 on socket 1 00:37:26.397 EAL: Detected lcore 29 as core 18 on socket 1 
00:37:26.397 EAL: Detected lcore 30 as core 19 on socket 1 00:37:26.397 EAL: Detected lcore 31 as core 20 on socket 1 00:37:26.397 EAL: Detected lcore 32 as core 24 on socket 1 00:37:26.397 EAL: Detected lcore 33 as core 25 on socket 1 00:37:26.397 EAL: Detected lcore 34 as core 26 on socket 1 00:37:26.397 EAL: Detected lcore 35 as core 27 on socket 1 00:37:26.397 EAL: Detected lcore 36 as core 0 on socket 0 00:37:26.398 EAL: Detected lcore 37 as core 1 on socket 0 00:37:26.398 EAL: Detected lcore 38 as core 2 on socket 0 00:37:26.398 EAL: Detected lcore 39 as core 3 on socket 0 00:37:26.398 EAL: Detected lcore 40 as core 4 on socket 0 00:37:26.398 EAL: Detected lcore 41 as core 8 on socket 0 00:37:26.398 EAL: Detected lcore 42 as core 9 on socket 0 00:37:26.398 EAL: Detected lcore 43 as core 10 on socket 0 00:37:26.398 EAL: Detected lcore 44 as core 11 on socket 0 00:37:26.398 EAL: Detected lcore 45 as core 16 on socket 0 00:37:26.398 EAL: Detected lcore 46 as core 17 on socket 0 00:37:26.398 EAL: Detected lcore 47 as core 18 on socket 0 00:37:26.398 EAL: Detected lcore 48 as core 19 on socket 0 00:37:26.398 EAL: Detected lcore 49 as core 20 on socket 0 00:37:26.398 EAL: Detected lcore 50 as core 24 on socket 0 00:37:26.398 EAL: Detected lcore 51 as core 25 on socket 0 00:37:26.398 EAL: Detected lcore 52 as core 26 on socket 0 00:37:26.398 EAL: Detected lcore 53 as core 27 on socket 0 00:37:26.398 EAL: Detected lcore 54 as core 0 on socket 1 00:37:26.398 EAL: Detected lcore 55 as core 1 on socket 1 00:37:26.398 EAL: Detected lcore 56 as core 2 on socket 1 00:37:26.398 EAL: Detected lcore 57 as core 3 on socket 1 00:37:26.398 EAL: Detected lcore 58 as core 4 on socket 1 00:37:26.398 EAL: Detected lcore 59 as core 8 on socket 1 00:37:26.398 EAL: Detected lcore 60 as core 9 on socket 1 00:37:26.398 EAL: Detected lcore 61 as core 10 on socket 1 00:37:26.398 EAL: Detected lcore 62 as core 11 on socket 1 00:37:26.398 EAL: Detected lcore 63 as core 16 on socket 1 
00:37:26.398 EAL: Detected lcore 64 as core 17 on socket 1 00:37:26.398 EAL: Detected lcore 65 as core 18 on socket 1 00:37:26.398 EAL: Detected lcore 66 as core 19 on socket 1 00:37:26.398 EAL: Detected lcore 67 as core 20 on socket 1 00:37:26.398 EAL: Detected lcore 68 as core 24 on socket 1 00:37:26.398 EAL: Detected lcore 69 as core 25 on socket 1 00:37:26.398 EAL: Detected lcore 70 as core 26 on socket 1 00:37:26.398 EAL: Detected lcore 71 as core 27 on socket 1 00:37:26.398 EAL: Maximum logical cores by configuration: 128 00:37:26.398 EAL: Detected CPU lcores: 72 00:37:26.398 EAL: Detected NUMA nodes: 2 00:37:26.398 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:37:26.398 EAL: Detected shared linkage of DPDK 00:37:26.398 EAL: No shared files mode enabled, IPC will be disabled 00:37:26.398 EAL: Bus pci wants IOVA as 'DC' 00:37:26.398 EAL: Buses did not request a specific IOVA mode. 00:37:26.398 EAL: IOMMU is available, selecting IOVA as VA mode. 00:37:26.398 EAL: Selected IOVA mode 'VA' 00:37:26.398 EAL: Probing VFIO support... 00:37:26.398 EAL: IOMMU type 1 (Type 1) is supported 00:37:26.398 EAL: IOMMU type 7 (sPAPR) is not supported 00:37:26.398 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:37:26.398 EAL: VFIO support initialized 00:37:26.398 EAL: Ask a virtual area of 0x2e000 bytes 00:37:26.398 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:37:26.398 EAL: Setting up physically contiguous memory... 
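Stepping back to the env_memory suite earlier in the log: the rejected parameter pairs (`vaddr=2097152 len=1234`, `vaddr=1234 len=2097152`) all fail 2 MiB hugepage alignment in one half. That check can be sketched as plain shell arithmetic (a simplified model of what `memory.c` validates, not SPDK's actual code):

```shell
# Simplified model of the 2 MiB alignment check behind the
# spdk_mem_map_set_translation errors in the log (not SPDK's code).
hugepage_sz=$(( 2 * 1024 * 1024 ))
is_aligned() { (( $1 % hugepage_sz == 0 )) && echo aligned || echo unaligned; }
is_aligned 2097152    # the vaddr=2097152 half of the pair is fine...
is_aligned 1234       # ...but the len=1234 half from the log is rejected
```

The 4 KiB suite (`mem_map_4kb`) runs the same tests with `hugepage_sz` of 4096, which is why `vaddr=4096 len=1234` shows up as the analogous rejected pair there.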
00:37:26.398 EAL: Setting maximum number of open files to 524288 00:37:26.398 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:37:26.398 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:37:26.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:37:26.398 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:37:26.398 EAL: Ask a virtual area of 0x61000 bytes 00:37:26.398 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:37:26.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:37:26.398 EAL: Ask a virtual area of 0x400000000 bytes 00:37:26.398 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:37:26.398 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:37:26.398 EAL: Hugepages will be freed exactly as allocated. 
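Each "Creating 4 segment lists" block above reserves a 0x400000000-byte virtual area per list; that size is exactly `n_segs` hugepages, which is easy to confirm:

```shell
# The memseg-list VA reservations in the log (size = 0x400000000) are
# exactly n_segs:8192 segments of hugepage_sz:2097152 (2 MiB) each.
n_segs=8192
hugepage_sz=2097152
printf '0x%x\n' $(( n_segs * hugepage_sz ))    # prints 0x400000000
```

With 4 lists per NUMA node and 2 nodes detected, the EAL is reserving 8 × 16 GiB of virtual address space up front, even though hugepages are only backed on demand.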
00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: TSC frequency is ~2300000 KHz 00:37:26.398 EAL: Main lcore 0 is ready (tid=7f3804d44a00;cpuset=[0]) 00:37:26.398 EAL: Trying to obtain current memory policy. 00:37:26.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.398 EAL: Restoring previous memory policy: 0 00:37:26.398 EAL: request: mp_malloc_sync 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: Heap on socket 0 was expanded by 2MB 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: No PCI address specified using 'addr=' in: bus=pci 00:37:26.398 EAL: Mem event callback 'spdk:(nil)' registered 00:37:26.398 00:37:26.398 00:37:26.398 CUnit - A unit testing framework for C - Version 2.1-3 00:37:26.398 http://cunit.sourceforge.net/ 00:37:26.398 00:37:26.398 00:37:26.398 Suite: components_suite 00:37:26.398 Test: vtophys_malloc_test ...passed 00:37:26.398 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:37:26.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.398 EAL: Restoring previous memory policy: 4 00:37:26.398 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.398 EAL: request: mp_malloc_sync 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: Heap on socket 0 was expanded by 4MB 00:37:26.398 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.398 EAL: request: mp_malloc_sync 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: Heap on socket 0 was shrunk by 4MB 00:37:26.398 EAL: Trying to obtain current memory policy. 
00:37:26.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.398 EAL: Restoring previous memory policy: 4 00:37:26.398 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.398 EAL: request: mp_malloc_sync 00:37:26.398 EAL: No shared files mode enabled, IPC is disabled 00:37:26.398 EAL: Heap on socket 0 was expanded by 6MB 00:37:26.398 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.398 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was shrunk by 6MB 00:37:26.399 EAL: Trying to obtain current memory policy. 00:37:26.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.399 EAL: Restoring previous memory policy: 4 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was expanded by 10MB 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was shrunk by 10MB 00:37:26.399 EAL: Trying to obtain current memory policy. 00:37:26.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.399 EAL: Restoring previous memory policy: 4 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was expanded by 18MB 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was shrunk by 18MB 00:37:26.399 EAL: Trying to obtain current memory policy. 
00:37:26.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.399 EAL: Restoring previous memory policy: 4 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was expanded by 34MB 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was shrunk by 34MB 00:37:26.399 EAL: Trying to obtain current memory policy. 00:37:26.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.399 EAL: Restoring previous memory policy: 4 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was expanded by 66MB 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was shrunk by 66MB 00:37:26.399 EAL: Trying to obtain current memory policy. 00:37:26.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.399 EAL: Restoring previous memory policy: 4 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.399 EAL: request: mp_malloc_sync 00:37:26.399 EAL: No shared files mode enabled, IPC is disabled 00:37:26.399 EAL: Heap on socket 0 was expanded by 130MB 00:37:26.399 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.659 EAL: request: mp_malloc_sync 00:37:26.659 EAL: No shared files mode enabled, IPC is disabled 00:37:26.659 EAL: Heap on socket 0 was shrunk by 130MB 00:37:26.659 EAL: Trying to obtain current memory policy. 
00:37:26.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.659 EAL: Restoring previous memory policy: 4 00:37:26.659 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.659 EAL: request: mp_malloc_sync 00:37:26.659 EAL: No shared files mode enabled, IPC is disabled 00:37:26.659 EAL: Heap on socket 0 was expanded by 258MB 00:37:26.659 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.659 EAL: request: mp_malloc_sync 00:37:26.659 EAL: No shared files mode enabled, IPC is disabled 00:37:26.659 EAL: Heap on socket 0 was shrunk by 258MB 00:37:26.659 EAL: Trying to obtain current memory policy. 00:37:26.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:26.920 EAL: Restoring previous memory policy: 4 00:37:26.920 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.920 EAL: request: mp_malloc_sync 00:37:26.920 EAL: No shared files mode enabled, IPC is disabled 00:37:26.920 EAL: Heap on socket 0 was expanded by 514MB 00:37:26.920 EAL: Calling mem event callback 'spdk:(nil)' 00:37:26.920 EAL: request: mp_malloc_sync 00:37:26.920 EAL: No shared files mode enabled, IPC is disabled 00:37:26.920 EAL: Heap on socket 0 was shrunk by 514MB 00:37:26.920 EAL: Trying to obtain current memory policy. 
00:37:26.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:37:27.181 EAL: Restoring previous memory policy: 4 00:37:27.181 EAL: Calling mem event callback 'spdk:(nil)' 00:37:27.181 EAL: request: mp_malloc_sync 00:37:27.181 EAL: No shared files mode enabled, IPC is disabled 00:37:27.181 EAL: Heap on socket 0 was expanded by 1026MB 00:37:27.441 EAL: Calling mem event callback 'spdk:(nil)' 00:37:27.702 EAL: request: mp_malloc_sync 00:37:27.702 EAL: No shared files mode enabled, IPC is disabled 00:37:27.702 EAL: Heap on socket 0 was shrunk by 1026MB 00:37:27.702 passed 00:37:27.702 00:37:27.702 Run Summary: Type Total Ran Passed Failed Inactive 00:37:27.702 suites 1 1 n/a 0 0 00:37:27.702 tests 2 2 2 0 0 00:37:27.702 asserts 497 497 497 0 n/a 00:37:27.702 00:37:27.702 Elapsed time = 1.183 seconds 00:37:27.702 EAL: Calling mem event callback 'spdk:(nil)' 00:37:27.702 EAL: request: mp_malloc_sync 00:37:27.702 EAL: No shared files mode enabled, IPC is disabled 00:37:27.702 EAL: Heap on socket 0 was shrunk by 2MB 00:37:27.702 EAL: No shared files mode enabled, IPC is disabled 00:37:27.702 EAL: No shared files mode enabled, IPC is disabled 00:37:27.702 EAL: No shared files mode enabled, IPC is disabled 00:37:27.702 00:37:27.702 real 0m1.377s 00:37:27.702 user 0m0.789s 00:37:27.702 sys 0m0.551s 00:37:27.702 10:48:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.702 10:48:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:37:27.702 ************************************ 00:37:27.702 END TEST env_vtophys 00:37:27.702 ************************************ 00:37:27.703 10:48:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:37:27.703 10:48:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:27.703 10:48:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.703 10:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:37:27.703 
************************************ 00:37:27.703 START TEST env_pci 00:37:27.703 ************************************ 00:37:27.703 10:48:28 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:37:27.703 00:37:27.703 00:37:27.703 CUnit - A unit testing framework for C - Version 2.1-3 00:37:27.703 http://cunit.sourceforge.net/ 00:37:27.703 00:37:27.703 00:37:27.703 Suite: pci 00:37:27.703 Test: pci_hook ...[2024-12-09 10:48:28.775231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2232916 has claimed it 00:37:27.703 EAL: Cannot find device (10000:00:01.0) 00:37:27.703 EAL: Failed to attach device on primary process 00:37:27.703 passed 00:37:27.703 00:37:27.703 Run Summary: Type Total Ran Passed Failed Inactive 00:37:27.703 suites 1 1 n/a 0 0 00:37:27.703 tests 1 1 1 0 0 00:37:27.703 asserts 25 25 25 0 n/a 00:37:27.703 00:37:27.703 Elapsed time = 0.039 seconds 00:37:27.703 00:37:27.703 real 0m0.063s 00:37:27.703 user 0m0.023s 00:37:27.703 sys 0m0.040s 00:37:27.703 10:48:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.703 10:48:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:37:27.703 ************************************ 00:37:27.703 END TEST env_pci 00:37:27.703 ************************************ 00:37:27.703 10:48:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:37:27.703 10:48:28 env -- env/env.sh@15 -- # uname 00:37:27.703 10:48:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:37:27.703 10:48:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:37:27.703 10:48:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:37:27.703 10:48:28 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:27.703 10:48:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.703 10:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:37:27.963 ************************************ 00:37:27.963 START TEST env_dpdk_post_init 00:37:27.963 ************************************ 00:37:27.963 10:48:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:37:27.963 EAL: Detected CPU lcores: 72 00:37:27.963 EAL: Detected NUMA nodes: 2 00:37:27.963 EAL: Detected shared linkage of DPDK 00:37:27.963 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:37:27.963 EAL: Selected IOVA mode 'VA' 00:37:27.963 EAL: VFIO support initialized 00:37:27.963 TELEMETRY: No legacy callbacks, legacy socket not created 00:37:27.963 EAL: Using IOMMU type 1 (Type 1) 00:37:27.963 EAL: Ignore mapping IO port bar(1) 00:37:27.963 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:37:27.963 EAL: Ignore mapping IO port bar(1) 00:37:27.963 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:37:27.963 EAL: Ignore mapping IO port bar(1) 00:37:27.963 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:37:28.224 EAL: Ignore mapping IO port bar(1) 00:37:28.224 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:37:28.224 EAL: Ignore mapping IO port bar(1) 00:37:28.224 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:37:28.224 EAL: Ignore mapping IO port bar(1) 00:37:28.224 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:37:28.224 EAL: Ignore mapping IO port bar(1) 00:37:28.224 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:37:28.224 EAL: Ignore mapping IO port bar(1) 00:37:28.224 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:37:28.795 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:37:28.795 EAL: Ignore mapping IO port bar(1) 00:37:28.795 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:37:28.795 EAL: Ignore mapping IO port bar(1) 00:37:28.795 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:37:29.078 EAL: Ignore mapping IO port bar(1) 00:37:29.078 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:37:34.384 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:37:34.384 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:37:34.644 Starting DPDK initialization... 00:37:34.644 Starting SPDK post initialization... 00:37:34.644 SPDK NVMe probe 00:37:34.644 Attaching to 0000:5e:00.0 00:37:34.644 Attached to 0000:5e:00.0 00:37:34.644 Cleaning up... 
00:37:34.644 00:37:34.644 real 0m6.746s 00:37:34.644 user 0m4.963s 00:37:34.644 sys 0m0.838s 00:37:34.644 10:48:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.644 10:48:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:37:34.644 ************************************ 00:37:34.644 END TEST env_dpdk_post_init 00:37:34.644 ************************************ 00:37:34.644 10:48:35 env -- env/env.sh@26 -- # uname 00:37:34.644 10:48:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:37:34.644 10:48:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:37:34.644 10:48:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:34.644 10:48:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.644 10:48:35 env -- common/autotest_common.sh@10 -- # set +x 00:37:34.644 ************************************ 00:37:34.644 START TEST env_mem_callbacks 00:37:34.644 ************************************ 00:37:34.644 10:48:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:37:34.644 EAL: Detected CPU lcores: 72 00:37:34.644 EAL: Detected NUMA nodes: 2 00:37:34.644 EAL: Detected shared linkage of DPDK 00:37:34.644 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:37:34.644 EAL: Selected IOVA mode 'VA' 00:37:34.644 EAL: VFIO support initialized 00:37:34.644 TELEMETRY: No legacy callbacks, legacy socket not created 00:37:34.644 00:37:34.644 00:37:34.644 CUnit - A unit testing framework for C - Version 2.1-3 00:37:34.644 http://cunit.sourceforge.net/ 00:37:34.644 00:37:34.644 00:37:34.644 Suite: memory 00:37:34.644 Test: test ... 
00:37:34.644 register 0x200000200000 2097152 00:37:34.644 malloc 3145728 00:37:34.644 register 0x200000400000 4194304 00:37:34.644 buf 0x200000500000 len 3145728 PASSED 00:37:34.644 malloc 64 00:37:34.644 buf 0x2000004fff40 len 64 PASSED 00:37:34.644 malloc 4194304 00:37:34.644 register 0x200000800000 6291456 00:37:34.644 buf 0x200000a00000 len 4194304 PASSED 00:37:34.644 free 0x200000500000 3145728 00:37:34.644 free 0x2000004fff40 64 00:37:34.644 unregister 0x200000400000 4194304 PASSED 00:37:34.644 free 0x200000a00000 4194304 00:37:34.644 unregister 0x200000800000 6291456 PASSED 00:37:34.644 malloc 8388608 00:37:34.644 register 0x200000400000 10485760 00:37:34.644 buf 0x200000600000 len 8388608 PASSED 00:37:34.644 free 0x200000600000 8388608 00:37:34.644 unregister 0x200000400000 10485760 PASSED 00:37:34.644 passed 00:37:34.644 00:37:34.644 Run Summary: Type Total Ran Passed Failed Inactive 00:37:34.644 suites 1 1 n/a 0 0 00:37:34.644 tests 1 1 1 0 0 00:37:34.644 asserts 15 15 15 0 n/a 00:37:34.644 00:37:34.644 Elapsed time = 0.009 seconds 00:37:34.644 00:37:34.644 real 0m0.059s 00:37:34.644 user 0m0.018s 00:37:34.644 sys 0m0.041s 00:37:34.644 10:48:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.644 10:48:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:37:34.644 ************************************ 00:37:34.644 END TEST env_mem_callbacks 00:37:34.644 ************************************ 00:37:34.904 00:37:34.904 real 0m9.248s 00:37:34.904 user 0m6.443s 00:37:34.904 sys 0m1.870s 00:37:34.904 10:48:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.904 10:48:35 env -- common/autotest_common.sh@10 -- # set +x 00:37:34.904 ************************************ 00:37:34.904 END TEST env 00:37:34.904 ************************************ 00:37:34.904 10:48:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:37:34.904 10:48:35 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:34.904 10:48:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.904 10:48:35 -- common/autotest_common.sh@10 -- # set +x 00:37:34.904 ************************************ 00:37:34.904 START TEST rpc 00:37:34.904 ************************************ 00:37:34.904 10:48:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:37:34.904 * Looking for test storage... 00:37:34.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:37:34.904 10:48:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:34.904 10:48:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:37:34.904 10:48:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:35.164 10:48:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.164 10:48:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.164 10:48:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.164 10:48:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.164 10:48:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.164 10:48:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.164 10:48:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:37:35.164 10:48:36 rpc -- scripts/common.sh@345 -- # : 1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.164 10:48:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:35.164 10:48:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@353 -- # local d=1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.164 10:48:36 rpc -- scripts/common.sh@355 -- # echo 1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.164 10:48:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@353 -- # local d=2 00:37:35.164 10:48:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.165 10:48:36 rpc -- scripts/common.sh@355 -- # echo 2 00:37:35.165 10:48:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.165 10:48:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.165 10:48:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.165 10:48:36 rpc -- scripts/common.sh@368 -- # return 0 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:35.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.165 --rc genhtml_branch_coverage=1 00:37:35.165 --rc genhtml_function_coverage=1 00:37:35.165 --rc genhtml_legend=1 00:37:35.165 --rc geninfo_all_blocks=1 00:37:35.165 --rc geninfo_unexecuted_blocks=1 00:37:35.165 00:37:35.165 ' 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:35.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.165 --rc genhtml_branch_coverage=1 00:37:35.165 --rc genhtml_function_coverage=1 00:37:35.165 --rc genhtml_legend=1 00:37:35.165 --rc geninfo_all_blocks=1 00:37:35.165 --rc geninfo_unexecuted_blocks=1 00:37:35.165 00:37:35.165 ' 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:35.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:37:35.165 --rc genhtml_branch_coverage=1 00:37:35.165 --rc genhtml_function_coverage=1 00:37:35.165 --rc genhtml_legend=1 00:37:35.165 --rc geninfo_all_blocks=1 00:37:35.165 --rc geninfo_unexecuted_blocks=1 00:37:35.165 00:37:35.165 ' 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:35.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.165 --rc genhtml_branch_coverage=1 00:37:35.165 --rc genhtml_function_coverage=1 00:37:35.165 --rc genhtml_legend=1 00:37:35.165 --rc geninfo_all_blocks=1 00:37:35.165 --rc geninfo_unexecuted_blocks=1 00:37:35.165 00:37:35.165 ' 00:37:35.165 10:48:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:37:35.165 10:48:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2234063 00:37:35.165 10:48:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:35.165 10:48:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2234063 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 2234063 ']' 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.165 10:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.165 [2024-12-09 10:48:36.155294] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:37:35.165 [2024-12-09 10:48:36.155352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234063 ] 00:37:35.165 [2024-12-09 10:48:36.267090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.165 [2024-12-09 10:48:36.321982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:37:35.165 [2024-12-09 10:48:36.322031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2234063' to capture a snapshot of events at runtime. 00:37:35.165 [2024-12-09 10:48:36.322047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.165 [2024-12-09 10:48:36.322061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.165 [2024-12-09 10:48:36.322072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2234063 for offline analysis/debug. 
00:37:35.165 [2024-12-09 10:48:36.322695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.425 10:48:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:35.425 10:48:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:37:35.425 10:48:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:37:35.425 10:48:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:37:35.425 10:48:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:37:35.425 10:48:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:37:35.425 10:48:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.425 10:48:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.425 10:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 ************************************ 00:37:35.686 START TEST rpc_integrity 00:37:35.686 ************************************ 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.686 10:48:36 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:37:35.686 { 00:37:35.686 "name": "Malloc0", 00:37:35.686 "aliases": [ 00:37:35.686 "0e46b3df-d44e-43ef-9f14-f2ca02db4b15" 00:37:35.686 ], 00:37:35.686 "product_name": "Malloc disk", 00:37:35.686 "block_size": 512, 00:37:35.686 "num_blocks": 16384, 00:37:35.686 "uuid": "0e46b3df-d44e-43ef-9f14-f2ca02db4b15", 00:37:35.686 "assigned_rate_limits": { 00:37:35.686 "rw_ios_per_sec": 0, 00:37:35.686 "rw_mbytes_per_sec": 0, 00:37:35.686 "r_mbytes_per_sec": 0, 00:37:35.686 "w_mbytes_per_sec": 0 00:37:35.686 }, 00:37:35.686 "claimed": false, 00:37:35.686 "zoned": false, 00:37:35.686 "supported_io_types": { 00:37:35.686 "read": true, 00:37:35.686 "write": true, 00:37:35.686 "unmap": true, 00:37:35.686 "flush": true, 00:37:35.686 "reset": true, 00:37:35.686 "nvme_admin": false, 00:37:35.686 "nvme_io": false, 00:37:35.686 "nvme_io_md": false, 00:37:35.686 "write_zeroes": true, 00:37:35.686 "zcopy": true, 00:37:35.686 "get_zone_info": false, 00:37:35.686 
"zone_management": false, 00:37:35.686 "zone_append": false, 00:37:35.686 "compare": false, 00:37:35.686 "compare_and_write": false, 00:37:35.686 "abort": true, 00:37:35.686 "seek_hole": false, 00:37:35.686 "seek_data": false, 00:37:35.686 "copy": true, 00:37:35.686 "nvme_iov_md": false 00:37:35.686 }, 00:37:35.686 "memory_domains": [ 00:37:35.686 { 00:37:35.686 "dma_device_id": "system", 00:37:35.686 "dma_device_type": 1 00:37:35.686 }, 00:37:35.686 { 00:37:35.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:35.686 "dma_device_type": 2 00:37:35.686 } 00:37:35.686 ], 00:37:35.686 "driver_specific": {} 00:37:35.686 } 00:37:35.686 ]' 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 [2024-12-09 10:48:36.760272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:37:35.686 [2024-12-09 10:48:36.760313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:35.686 [2024-12-09 10:48:36.760333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eafd40 00:37:35.686 [2024-12-09 10:48:36.760346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:35.686 [2024-12-09 10:48:36.761953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:35.686 [2024-12-09 10:48:36.761983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:37:35.686 Passthru0 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.686 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.686 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:37:35.686 { 00:37:35.686 "name": "Malloc0", 00:37:35.686 "aliases": [ 00:37:35.686 "0e46b3df-d44e-43ef-9f14-f2ca02db4b15" 00:37:35.686 ], 00:37:35.686 "product_name": "Malloc disk", 00:37:35.686 "block_size": 512, 00:37:35.686 "num_blocks": 16384, 00:37:35.686 "uuid": "0e46b3df-d44e-43ef-9f14-f2ca02db4b15", 00:37:35.686 "assigned_rate_limits": { 00:37:35.686 "rw_ios_per_sec": 0, 00:37:35.686 "rw_mbytes_per_sec": 0, 00:37:35.686 "r_mbytes_per_sec": 0, 00:37:35.686 "w_mbytes_per_sec": 0 00:37:35.686 }, 00:37:35.686 "claimed": true, 00:37:35.686 "claim_type": "exclusive_write", 00:37:35.686 "zoned": false, 00:37:35.686 "supported_io_types": { 00:37:35.686 "read": true, 00:37:35.686 "write": true, 00:37:35.686 "unmap": true, 00:37:35.686 "flush": true, 00:37:35.686 "reset": true, 00:37:35.686 "nvme_admin": false, 00:37:35.686 "nvme_io": false, 00:37:35.686 "nvme_io_md": false, 00:37:35.686 "write_zeroes": true, 00:37:35.686 "zcopy": true, 00:37:35.686 "get_zone_info": false, 00:37:35.686 "zone_management": false, 00:37:35.686 "zone_append": false, 00:37:35.686 "compare": false, 00:37:35.686 "compare_and_write": false, 00:37:35.686 "abort": true, 00:37:35.686 "seek_hole": false, 00:37:35.686 "seek_data": false, 00:37:35.686 "copy": true, 00:37:35.686 "nvme_iov_md": false 00:37:35.686 }, 00:37:35.686 "memory_domains": [ 00:37:35.686 { 00:37:35.686 "dma_device_id": "system", 00:37:35.686 "dma_device_type": 1 00:37:35.686 }, 00:37:35.686 { 00:37:35.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:35.686 "dma_device_type": 2 00:37:35.686 } 00:37:35.686 ], 00:37:35.686 "driver_specific": {} 00:37:35.686 }, 00:37:35.686 { 
00:37:35.686 "name": "Passthru0", 00:37:35.686 "aliases": [ 00:37:35.686 "1b78513d-8b34-5404-a7f5-75fbe2c2e663" 00:37:35.686 ], 00:37:35.686 "product_name": "passthru", 00:37:35.686 "block_size": 512, 00:37:35.686 "num_blocks": 16384, 00:37:35.687 "uuid": "1b78513d-8b34-5404-a7f5-75fbe2c2e663", 00:37:35.687 "assigned_rate_limits": { 00:37:35.687 "rw_ios_per_sec": 0, 00:37:35.687 "rw_mbytes_per_sec": 0, 00:37:35.687 "r_mbytes_per_sec": 0, 00:37:35.687 "w_mbytes_per_sec": 0 00:37:35.687 }, 00:37:35.687 "claimed": false, 00:37:35.687 "zoned": false, 00:37:35.687 "supported_io_types": { 00:37:35.687 "read": true, 00:37:35.687 "write": true, 00:37:35.687 "unmap": true, 00:37:35.687 "flush": true, 00:37:35.687 "reset": true, 00:37:35.687 "nvme_admin": false, 00:37:35.687 "nvme_io": false, 00:37:35.687 "nvme_io_md": false, 00:37:35.687 "write_zeroes": true, 00:37:35.687 "zcopy": true, 00:37:35.687 "get_zone_info": false, 00:37:35.687 "zone_management": false, 00:37:35.687 "zone_append": false, 00:37:35.687 "compare": false, 00:37:35.687 "compare_and_write": false, 00:37:35.687 "abort": true, 00:37:35.687 "seek_hole": false, 00:37:35.687 "seek_data": false, 00:37:35.687 "copy": true, 00:37:35.687 "nvme_iov_md": false 00:37:35.687 }, 00:37:35.687 "memory_domains": [ 00:37:35.687 { 00:37:35.687 "dma_device_id": "system", 00:37:35.687 "dma_device_type": 1 00:37:35.687 }, 00:37:35.687 { 00:37:35.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:35.687 "dma_device_type": 2 00:37:35.687 } 00:37:35.687 ], 00:37:35.687 "driver_specific": { 00:37:35.687 "passthru": { 00:37:35.687 "name": "Passthru0", 00:37:35.687 "base_bdev_name": "Malloc0" 00:37:35.687 } 00:37:35.687 } 00:37:35.687 } 00:37:35.687 ]' 00:37:35.687 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:37:35.687 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:37:35.687 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:37:35.687 10:48:36 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.687 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:37:35.947 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:37:35.947 10:48:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:37:35.947 00:37:35.947 real 0m0.288s 00:37:35.947 user 0m0.168s 00:37:35.947 sys 0m0.054s 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.947 10:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 ************************************ 00:37:35.947 END TEST rpc_integrity 00:37:35.947 ************************************ 00:37:35.947 10:48:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:37:35.947 10:48:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.947 10:48:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.947 10:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 ************************************ 00:37:35.947 START TEST rpc_plugins 
00:37:35.947 ************************************ 00:37:35.947 10:48:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:37:35.947 10:48:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:37:35.947 10:48:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:37:35.947 { 00:37:35.947 "name": "Malloc1", 00:37:35.947 "aliases": [ 00:37:35.947 "2fb0d573-23f0-4ca3-aa46-9ed39baa767a" 00:37:35.947 ], 00:37:35.947 "product_name": "Malloc disk", 00:37:35.947 "block_size": 4096, 00:37:35.947 "num_blocks": 256, 00:37:35.947 "uuid": "2fb0d573-23f0-4ca3-aa46-9ed39baa767a", 00:37:35.947 "assigned_rate_limits": { 00:37:35.947 "rw_ios_per_sec": 0, 00:37:35.947 "rw_mbytes_per_sec": 0, 00:37:35.947 "r_mbytes_per_sec": 0, 00:37:35.947 "w_mbytes_per_sec": 0 00:37:35.947 }, 00:37:35.947 "claimed": false, 00:37:35.947 "zoned": false, 00:37:35.947 "supported_io_types": { 00:37:35.947 "read": true, 00:37:35.947 "write": true, 00:37:35.947 "unmap": true, 00:37:35.947 "flush": true, 00:37:35.947 "reset": true, 00:37:35.947 "nvme_admin": false, 00:37:35.947 "nvme_io": false, 00:37:35.947 "nvme_io_md": false, 00:37:35.947 "write_zeroes": true, 00:37:35.947 "zcopy": true, 00:37:35.947 "get_zone_info": false, 00:37:35.947 "zone_management": false, 00:37:35.947 
"zone_append": false, 00:37:35.947 "compare": false, 00:37:35.947 "compare_and_write": false, 00:37:35.947 "abort": true, 00:37:35.947 "seek_hole": false, 00:37:35.947 "seek_data": false, 00:37:35.947 "copy": true, 00:37:35.947 "nvme_iov_md": false 00:37:35.947 }, 00:37:35.947 "memory_domains": [ 00:37:35.947 { 00:37:35.947 "dma_device_id": "system", 00:37:35.947 "dma_device_type": 1 00:37:35.947 }, 00:37:35.947 { 00:37:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:35.947 "dma_device_type": 2 00:37:35.947 } 00:37:35.947 ], 00:37:35.947 "driver_specific": {} 00:37:35.947 } 00:37:35.947 ]' 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:37:35.947 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:37:35.947 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:37:36.207 10:48:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:37:36.207 00:37:36.207 real 0m0.162s 00:37:36.207 user 0m0.104s 00:37:36.207 sys 0m0.023s 00:37:36.207 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.207 10:48:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:37:36.207 ************************************ 
00:37:36.207 END TEST rpc_plugins 00:37:36.207 ************************************ 00:37:36.207 10:48:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:37:36.207 10:48:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:36.207 10:48:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.207 10:48:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:36.207 ************************************ 00:37:36.207 START TEST rpc_trace_cmd_test 00:37:36.207 ************************************ 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.207 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:37:36.207 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2234063", 00:37:36.207 "tpoint_group_mask": "0x8", 00:37:36.207 "iscsi_conn": { 00:37:36.207 "mask": "0x2", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "scsi": { 00:37:36.207 "mask": "0x4", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "bdev": { 00:37:36.207 "mask": "0x8", 00:37:36.207 "tpoint_mask": "0xffffffffffffffff" 00:37:36.207 }, 00:37:36.207 "nvmf_rdma": { 00:37:36.207 "mask": "0x10", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "nvmf_tcp": { 00:37:36.207 "mask": "0x20", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "ftl": { 00:37:36.207 "mask": "0x40", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "blobfs": { 00:37:36.207 "mask": "0x80", 00:37:36.207 
"tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "dsa": { 00:37:36.207 "mask": "0x200", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "thread": { 00:37:36.207 "mask": "0x400", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "nvme_pcie": { 00:37:36.207 "mask": "0x800", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.207 "iaa": { 00:37:36.207 "mask": "0x1000", 00:37:36.207 "tpoint_mask": "0x0" 00:37:36.207 }, 00:37:36.208 "nvme_tcp": { 00:37:36.208 "mask": "0x2000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 }, 00:37:36.208 "bdev_nvme": { 00:37:36.208 "mask": "0x4000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 }, 00:37:36.208 "sock": { 00:37:36.208 "mask": "0x8000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 }, 00:37:36.208 "blob": { 00:37:36.208 "mask": "0x10000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 }, 00:37:36.208 "bdev_raid": { 00:37:36.208 "mask": "0x20000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 }, 00:37:36.208 "scheduler": { 00:37:36.208 "mask": "0x40000", 00:37:36.208 "tpoint_mask": "0x0" 00:37:36.208 } 00:37:36.208 }' 00:37:36.208 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:37:36.208 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:37:36.208 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:37:36.208 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:37:36.208 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:37:36.468 00:37:36.468 real 0m0.269s 00:37:36.468 user 0m0.220s 00:37:36.468 sys 0m0.040s 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.468 10:48:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.468 ************************************ 00:37:36.468 END TEST rpc_trace_cmd_test 00:37:36.468 ************************************ 00:37:36.468 10:48:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:37:36.468 10:48:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:37:36.468 10:48:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:37:36.468 10:48:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:36.468 10:48:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.468 10:48:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:36.468 ************************************ 00:37:36.468 START TEST rpc_daemon_integrity 00:37:36.468 ************************************ 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:37:36.468 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.728 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:37:36.728 { 00:37:36.728 "name": "Malloc2", 00:37:36.728 "aliases": [ 00:37:36.728 "f58ad965-eb85-495d-823d-2dc841c7e68d" 00:37:36.728 ], 00:37:36.728 "product_name": "Malloc disk", 00:37:36.728 "block_size": 512, 00:37:36.728 "num_blocks": 16384, 00:37:36.728 "uuid": "f58ad965-eb85-495d-823d-2dc841c7e68d", 00:37:36.728 "assigned_rate_limits": { 00:37:36.728 "rw_ios_per_sec": 0, 00:37:36.728 "rw_mbytes_per_sec": 0, 00:37:36.728 "r_mbytes_per_sec": 0, 00:37:36.728 "w_mbytes_per_sec": 0 00:37:36.728 }, 00:37:36.728 "claimed": false, 00:37:36.728 "zoned": false, 00:37:36.728 "supported_io_types": { 00:37:36.728 "read": true, 00:37:36.728 "write": true, 00:37:36.728 "unmap": true, 00:37:36.728 "flush": true, 00:37:36.728 "reset": true, 00:37:36.728 "nvme_admin": false, 00:37:36.728 "nvme_io": false, 00:37:36.728 "nvme_io_md": false, 00:37:36.728 "write_zeroes": true, 00:37:36.728 "zcopy": true, 00:37:36.728 "get_zone_info": false, 00:37:36.728 "zone_management": false, 00:37:36.728 "zone_append": false, 00:37:36.728 "compare": false, 00:37:36.728 "compare_and_write": false, 00:37:36.728 "abort": true, 00:37:36.728 "seek_hole": false, 00:37:36.729 "seek_data": false, 00:37:36.729 "copy": true, 00:37:36.729 "nvme_iov_md": false 00:37:36.729 }, 00:37:36.729 "memory_domains": [ 00:37:36.729 { 
00:37:36.729 "dma_device_id": "system", 00:37:36.729 "dma_device_type": 1 00:37:36.729 }, 00:37:36.729 { 00:37:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.729 "dma_device_type": 2 00:37:36.729 } 00:37:36.729 ], 00:37:36.729 "driver_specific": {} 00:37:36.729 } 00:37:36.729 ]' 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 [2024-12-09 10:48:37.735427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:37:36.729 [2024-12-09 10:48:37.735467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:36.729 [2024-12-09 10:48:37.735487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fe0a10 00:37:36.729 [2024-12-09 10:48:37.735500] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:36.729 [2024-12-09 10:48:37.736886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.729 [2024-12-09 10:48:37.736917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:37:36.729 Passthru0 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:37:36.729 { 00:37:36.729 "name": "Malloc2", 00:37:36.729 "aliases": [ 00:37:36.729 "f58ad965-eb85-495d-823d-2dc841c7e68d" 00:37:36.729 ], 00:37:36.729 "product_name": "Malloc disk", 00:37:36.729 "block_size": 512, 00:37:36.729 "num_blocks": 16384, 00:37:36.729 "uuid": "f58ad965-eb85-495d-823d-2dc841c7e68d", 00:37:36.729 "assigned_rate_limits": { 00:37:36.729 "rw_ios_per_sec": 0, 00:37:36.729 "rw_mbytes_per_sec": 0, 00:37:36.729 "r_mbytes_per_sec": 0, 00:37:36.729 "w_mbytes_per_sec": 0 00:37:36.729 }, 00:37:36.729 "claimed": true, 00:37:36.729 "claim_type": "exclusive_write", 00:37:36.729 "zoned": false, 00:37:36.729 "supported_io_types": { 00:37:36.729 "read": true, 00:37:36.729 "write": true, 00:37:36.729 "unmap": true, 00:37:36.729 "flush": true, 00:37:36.729 "reset": true, 00:37:36.729 "nvme_admin": false, 00:37:36.729 "nvme_io": false, 00:37:36.729 "nvme_io_md": false, 00:37:36.729 "write_zeroes": true, 00:37:36.729 "zcopy": true, 00:37:36.729 "get_zone_info": false, 00:37:36.729 "zone_management": false, 00:37:36.729 "zone_append": false, 00:37:36.729 "compare": false, 00:37:36.729 "compare_and_write": false, 00:37:36.729 "abort": true, 00:37:36.729 "seek_hole": false, 00:37:36.729 "seek_data": false, 00:37:36.729 "copy": true, 00:37:36.729 "nvme_iov_md": false 00:37:36.729 }, 00:37:36.729 "memory_domains": [ 00:37:36.729 { 00:37:36.729 "dma_device_id": "system", 00:37:36.729 "dma_device_type": 1 00:37:36.729 }, 00:37:36.729 { 00:37:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.729 "dma_device_type": 2 00:37:36.729 } 00:37:36.729 ], 00:37:36.729 "driver_specific": {} 00:37:36.729 }, 00:37:36.729 { 00:37:36.729 "name": "Passthru0", 00:37:36.729 "aliases": [ 00:37:36.729 "680bf12e-4a31-53d6-88d6-c31fd22668e2" 00:37:36.729 ], 00:37:36.729 "product_name": "passthru", 00:37:36.729 "block_size": 512, 00:37:36.729 "num_blocks": 16384, 00:37:36.729 "uuid": 
"680bf12e-4a31-53d6-88d6-c31fd22668e2", 00:37:36.729 "assigned_rate_limits": { 00:37:36.729 "rw_ios_per_sec": 0, 00:37:36.729 "rw_mbytes_per_sec": 0, 00:37:36.729 "r_mbytes_per_sec": 0, 00:37:36.729 "w_mbytes_per_sec": 0 00:37:36.729 }, 00:37:36.729 "claimed": false, 00:37:36.729 "zoned": false, 00:37:36.729 "supported_io_types": { 00:37:36.729 "read": true, 00:37:36.729 "write": true, 00:37:36.729 "unmap": true, 00:37:36.729 "flush": true, 00:37:36.729 "reset": true, 00:37:36.729 "nvme_admin": false, 00:37:36.729 "nvme_io": false, 00:37:36.729 "nvme_io_md": false, 00:37:36.729 "write_zeroes": true, 00:37:36.729 "zcopy": true, 00:37:36.729 "get_zone_info": false, 00:37:36.729 "zone_management": false, 00:37:36.729 "zone_append": false, 00:37:36.729 "compare": false, 00:37:36.729 "compare_and_write": false, 00:37:36.729 "abort": true, 00:37:36.729 "seek_hole": false, 00:37:36.729 "seek_data": false, 00:37:36.729 "copy": true, 00:37:36.729 "nvme_iov_md": false 00:37:36.729 }, 00:37:36.729 "memory_domains": [ 00:37:36.729 { 00:37:36.729 "dma_device_id": "system", 00:37:36.729 "dma_device_type": 1 00:37:36.729 }, 00:37:36.729 { 00:37:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.729 "dma_device_type": 2 00:37:36.729 } 00:37:36.729 ], 00:37:36.729 "driver_specific": { 00:37:36.729 "passthru": { 00:37:36.729 "name": "Passthru0", 00:37:36.729 "base_bdev_name": "Malloc2" 00:37:36.729 } 00:37:36.729 } 00:37:36.729 } 00:37:36.729 ]' 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:37:36.729 00:37:36.729 real 0m0.312s 00:37:36.729 user 0m0.196s 00:37:36.729 sys 0m0.057s 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.729 10:48:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:37:36.729 ************************************ 00:37:36.729 END TEST rpc_daemon_integrity 00:37:36.729 ************************************ 00:37:36.989 10:48:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:36.989 10:48:37 rpc -- rpc/rpc.sh@84 -- # killprocess 2234063 00:37:36.989 10:48:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 2234063 ']' 00:37:36.989 10:48:37 rpc -- common/autotest_common.sh@958 -- # kill -0 2234063 00:37:36.989 10:48:37 rpc -- common/autotest_common.sh@959 -- # uname 00:37:36.989 10:48:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.989 10:48:37 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234063 00:37:36.989 10:48:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:36.989 10:48:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:36.989 10:48:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234063' 00:37:36.989 killing process with pid 2234063 00:37:36.989 10:48:38 rpc -- common/autotest_common.sh@973 -- # kill 2234063 00:37:36.989 10:48:38 rpc -- common/autotest_common.sh@978 -- # wait 2234063 00:37:37.560 00:37:37.560 real 0m2.525s 00:37:37.560 user 0m3.225s 00:37:37.560 sys 0m0.867s 00:37:37.560 10:48:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.560 10:48:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.560 ************************************ 00:37:37.560 END TEST rpc 00:37:37.560 ************************************ 00:37:37.560 10:48:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:37:37.560 10:48:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.560 10:48:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.560 10:48:38 -- common/autotest_common.sh@10 -- # set +x 00:37:37.560 ************************************ 00:37:37.560 START TEST skip_rpc 00:37:37.560 ************************************ 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:37:37.560 * Looking for test storage... 
00:37:37.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:37.560 10:48:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.560 --rc genhtml_branch_coverage=1 00:37:37.560 --rc genhtml_function_coverage=1 00:37:37.560 --rc genhtml_legend=1 00:37:37.560 --rc geninfo_all_blocks=1 00:37:37.560 --rc geninfo_unexecuted_blocks=1 00:37:37.560 00:37:37.560 ' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.560 --rc genhtml_branch_coverage=1 00:37:37.560 --rc genhtml_function_coverage=1 00:37:37.560 --rc genhtml_legend=1 00:37:37.560 --rc geninfo_all_blocks=1 00:37:37.560 --rc geninfo_unexecuted_blocks=1 00:37:37.560 00:37:37.560 ' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:37:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.560 --rc genhtml_branch_coverage=1 00:37:37.560 --rc genhtml_function_coverage=1 00:37:37.560 --rc genhtml_legend=1 00:37:37.560 --rc geninfo_all_blocks=1 00:37:37.560 --rc geninfo_unexecuted_blocks=1 00:37:37.560 00:37:37.560 ' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.560 --rc genhtml_branch_coverage=1 00:37:37.560 --rc genhtml_function_coverage=1 00:37:37.560 --rc genhtml_legend=1 00:37:37.560 --rc geninfo_all_blocks=1 00:37:37.560 --rc geninfo_unexecuted_blocks=1 00:37:37.560 00:37:37.560 ' 00:37:37.560 10:48:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:37:37.560 10:48:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:37:37.560 10:48:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.560 10:48:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.820 ************************************ 00:37:37.820 START TEST skip_rpc 00:37:37.820 ************************************ 00:37:37.820 10:48:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:37:37.820 10:48:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2234599 00:37:37.820 10:48:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:37.820 10:48:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:37:37.820 10:48:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:37:37.820 [2024-12-09 10:48:38.836931] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:37:37.820 [2024-12-09 10:48:38.837000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234599 ] 00:37:37.820 [2024-12-09 10:48:38.962652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.081 [2024-12-09 10:48:39.019006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:43.368 10:48:43 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2234599 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2234599 ']' 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2234599 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234599 00:37:43.368 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:43.369 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:43.369 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234599' 00:37:43.369 killing process with pid 2234599 00:37:43.369 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2234599 00:37:43.369 10:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2234599 00:37:43.369 00:37:43.369 real 0m5.511s 00:37:43.369 user 0m5.193s 00:37:43.369 sys 0m0.368s 00:37:43.369 10:48:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:43.369 10:48:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:43.369 ************************************ 00:37:43.369 END TEST skip_rpc 00:37:43.369 ************************************ 00:37:43.369 10:48:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:37:43.369 10:48:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:43.369 10:48:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:43.369 10:48:44 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:43.369 ************************************ 00:37:43.369 START TEST skip_rpc_with_json 00:37:43.369 ************************************ 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2235345 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2235345 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2235345 ']' 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.369 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:43.369 [2024-12-09 10:48:44.408609] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:37:43.369 [2024-12-09 10:48:44.408669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235345 ] 00:37:43.369 [2024-12-09 10:48:44.518771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.629 [2024-12-09 10:48:44.575462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 [2024-12-09 10:48:44.841001] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:37:43.890 request: 00:37:43.890 { 00:37:43.890 "trtype": "tcp", 00:37:43.890 "method": "nvmf_get_transports", 00:37:43.890 "req_id": 1 00:37:43.890 } 00:37:43.890 Got JSON-RPC error response 00:37:43.890 response: 00:37:43.890 { 00:37:43.890 "code": -19, 00:37:43.890 "message": "No such device" 00:37:43.890 } 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 [2024-12-09 10:48:44.853145] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.890 10:48:44 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.890 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.890 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:37:43.890 { 00:37:43.890 "subsystems": [ 00:37:43.890 { 00:37:43.890 "subsystem": "fsdev", 00:37:43.890 "config": [ 00:37:43.890 { 00:37:43.890 "method": "fsdev_set_opts", 00:37:43.890 "params": { 00:37:43.890 "fsdev_io_pool_size": 65535, 00:37:43.890 "fsdev_io_cache_size": 256 00:37:43.890 } 00:37:43.890 } 00:37:43.890 ] 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "vfio_user_target", 00:37:43.890 "config": null 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "keyring", 00:37:43.890 "config": [] 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "iobuf", 00:37:43.890 "config": [ 00:37:43.890 { 00:37:43.890 "method": "iobuf_set_options", 00:37:43.890 "params": { 00:37:43.890 "small_pool_count": 8192, 00:37:43.890 "large_pool_count": 1024, 00:37:43.890 "small_bufsize": 8192, 00:37:43.890 "large_bufsize": 135168, 00:37:43.890 "enable_numa": false 00:37:43.890 } 00:37:43.890 } 00:37:43.890 ] 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "sock", 00:37:43.890 "config": [ 00:37:43.890 { 00:37:43.890 "method": "sock_set_default_impl", 00:37:43.890 "params": { 00:37:43.890 "impl_name": "posix" 00:37:43.890 } 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "method": "sock_impl_set_options", 00:37:43.890 "params": { 00:37:43.890 "impl_name": "ssl", 00:37:43.890 "recv_buf_size": 4096, 00:37:43.890 "send_buf_size": 4096, 
00:37:43.890 "enable_recv_pipe": true, 00:37:43.890 "enable_quickack": false, 00:37:43.890 "enable_placement_id": 0, 00:37:43.890 "enable_zerocopy_send_server": true, 00:37:43.890 "enable_zerocopy_send_client": false, 00:37:43.890 "zerocopy_threshold": 0, 00:37:43.890 "tls_version": 0, 00:37:43.890 "enable_ktls": false 00:37:43.890 } 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "method": "sock_impl_set_options", 00:37:43.890 "params": { 00:37:43.890 "impl_name": "posix", 00:37:43.890 "recv_buf_size": 2097152, 00:37:43.890 "send_buf_size": 2097152, 00:37:43.890 "enable_recv_pipe": true, 00:37:43.890 "enable_quickack": false, 00:37:43.890 "enable_placement_id": 0, 00:37:43.890 "enable_zerocopy_send_server": true, 00:37:43.890 "enable_zerocopy_send_client": false, 00:37:43.890 "zerocopy_threshold": 0, 00:37:43.890 "tls_version": 0, 00:37:43.890 "enable_ktls": false 00:37:43.890 } 00:37:43.890 } 00:37:43.890 ] 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "vmd", 00:37:43.890 "config": [] 00:37:43.890 }, 00:37:43.890 { 00:37:43.890 "subsystem": "accel", 00:37:43.890 "config": [ 00:37:43.890 { 00:37:43.890 "method": "accel_set_options", 00:37:43.890 "params": { 00:37:43.890 "small_cache_size": 128, 00:37:43.890 "large_cache_size": 16, 00:37:43.890 "task_count": 2048, 00:37:43.890 "sequence_count": 2048, 00:37:43.890 "buf_count": 2048 00:37:43.890 } 00:37:43.890 } 00:37:43.890 ] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "bdev", 00:37:43.891 "config": [ 00:37:43.891 { 00:37:43.891 "method": "bdev_set_options", 00:37:43.891 "params": { 00:37:43.891 "bdev_io_pool_size": 65535, 00:37:43.891 "bdev_io_cache_size": 256, 00:37:43.891 "bdev_auto_examine": true, 00:37:43.891 "iobuf_small_cache_size": 128, 00:37:43.891 "iobuf_large_cache_size": 16 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "bdev_raid_set_options", 00:37:43.891 "params": { 00:37:43.891 "process_window_size_kb": 1024, 00:37:43.891 "process_max_bandwidth_mb_sec": 0 
00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "bdev_iscsi_set_options", 00:37:43.891 "params": { 00:37:43.891 "timeout_sec": 30 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "bdev_nvme_set_options", 00:37:43.891 "params": { 00:37:43.891 "action_on_timeout": "none", 00:37:43.891 "timeout_us": 0, 00:37:43.891 "timeout_admin_us": 0, 00:37:43.891 "keep_alive_timeout_ms": 10000, 00:37:43.891 "arbitration_burst": 0, 00:37:43.891 "low_priority_weight": 0, 00:37:43.891 "medium_priority_weight": 0, 00:37:43.891 "high_priority_weight": 0, 00:37:43.891 "nvme_adminq_poll_period_us": 10000, 00:37:43.891 "nvme_ioq_poll_period_us": 0, 00:37:43.891 "io_queue_requests": 0, 00:37:43.891 "delay_cmd_submit": true, 00:37:43.891 "transport_retry_count": 4, 00:37:43.891 "bdev_retry_count": 3, 00:37:43.891 "transport_ack_timeout": 0, 00:37:43.891 "ctrlr_loss_timeout_sec": 0, 00:37:43.891 "reconnect_delay_sec": 0, 00:37:43.891 "fast_io_fail_timeout_sec": 0, 00:37:43.891 "disable_auto_failback": false, 00:37:43.891 "generate_uuids": false, 00:37:43.891 "transport_tos": 0, 00:37:43.891 "nvme_error_stat": false, 00:37:43.891 "rdma_srq_size": 0, 00:37:43.891 "io_path_stat": false, 00:37:43.891 "allow_accel_sequence": false, 00:37:43.891 "rdma_max_cq_size": 0, 00:37:43.891 "rdma_cm_event_timeout_ms": 0, 00:37:43.891 "dhchap_digests": [ 00:37:43.891 "sha256", 00:37:43.891 "sha384", 00:37:43.891 "sha512" 00:37:43.891 ], 00:37:43.891 "dhchap_dhgroups": [ 00:37:43.891 "null", 00:37:43.891 "ffdhe2048", 00:37:43.891 "ffdhe3072", 00:37:43.891 "ffdhe4096", 00:37:43.891 "ffdhe6144", 00:37:43.891 "ffdhe8192" 00:37:43.891 ] 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "bdev_nvme_set_hotplug", 00:37:43.891 "params": { 00:37:43.891 "period_us": 100000, 00:37:43.891 "enable": false 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "bdev_wait_for_examine" 00:37:43.891 } 00:37:43.891 ] 00:37:43.891 }, 00:37:43.891 { 
00:37:43.891 "subsystem": "scsi", 00:37:43.891 "config": null 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "scheduler", 00:37:43.891 "config": [ 00:37:43.891 { 00:37:43.891 "method": "framework_set_scheduler", 00:37:43.891 "params": { 00:37:43.891 "name": "static" 00:37:43.891 } 00:37:43.891 } 00:37:43.891 ] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "vhost_scsi", 00:37:43.891 "config": [] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "vhost_blk", 00:37:43.891 "config": [] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "ublk", 00:37:43.891 "config": [] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "nbd", 00:37:43.891 "config": [] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "nvmf", 00:37:43.891 "config": [ 00:37:43.891 { 00:37:43.891 "method": "nvmf_set_config", 00:37:43.891 "params": { 00:37:43.891 "discovery_filter": "match_any", 00:37:43.891 "admin_cmd_passthru": { 00:37:43.891 "identify_ctrlr": false 00:37:43.891 }, 00:37:43.891 "dhchap_digests": [ 00:37:43.891 "sha256", 00:37:43.891 "sha384", 00:37:43.891 "sha512" 00:37:43.891 ], 00:37:43.891 "dhchap_dhgroups": [ 00:37:43.891 "null", 00:37:43.891 "ffdhe2048", 00:37:43.891 "ffdhe3072", 00:37:43.891 "ffdhe4096", 00:37:43.891 "ffdhe6144", 00:37:43.891 "ffdhe8192" 00:37:43.891 ] 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "nvmf_set_max_subsystems", 00:37:43.891 "params": { 00:37:43.891 "max_subsystems": 1024 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "nvmf_set_crdt", 00:37:43.891 "params": { 00:37:43.891 "crdt1": 0, 00:37:43.891 "crdt2": 0, 00:37:43.891 "crdt3": 0 00:37:43.891 } 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "method": "nvmf_create_transport", 00:37:43.891 "params": { 00:37:43.891 "trtype": "TCP", 00:37:43.891 "max_queue_depth": 128, 00:37:43.891 "max_io_qpairs_per_ctrlr": 127, 00:37:43.891 "in_capsule_data_size": 4096, 00:37:43.891 "max_io_size": 131072, 00:37:43.891 
"io_unit_size": 131072, 00:37:43.891 "max_aq_depth": 128, 00:37:43.891 "num_shared_buffers": 511, 00:37:43.891 "buf_cache_size": 4294967295, 00:37:43.891 "dif_insert_or_strip": false, 00:37:43.891 "zcopy": false, 00:37:43.891 "c2h_success": true, 00:37:43.891 "sock_priority": 0, 00:37:43.891 "abort_timeout_sec": 1, 00:37:43.891 "ack_timeout": 0, 00:37:43.891 "data_wr_pool_size": 0 00:37:43.891 } 00:37:43.891 } 00:37:43.891 ] 00:37:43.891 }, 00:37:43.891 { 00:37:43.891 "subsystem": "iscsi", 00:37:43.891 "config": [ 00:37:43.891 { 00:37:43.891 "method": "iscsi_set_options", 00:37:43.891 "params": { 00:37:43.891 "node_base": "iqn.2016-06.io.spdk", 00:37:43.891 "max_sessions": 128, 00:37:43.891 "max_connections_per_session": 2, 00:37:43.891 "max_queue_depth": 64, 00:37:43.891 "default_time2wait": 2, 00:37:43.891 "default_time2retain": 20, 00:37:43.891 "first_burst_length": 8192, 00:37:43.891 "immediate_data": true, 00:37:43.891 "allow_duplicated_isid": false, 00:37:43.891 "error_recovery_level": 0, 00:37:43.891 "nop_timeout": 60, 00:37:43.891 "nop_in_interval": 30, 00:37:43.891 "disable_chap": false, 00:37:43.891 "require_chap": false, 00:37:43.891 "mutual_chap": false, 00:37:43.891 "chap_group": 0, 00:37:43.891 "max_large_datain_per_connection": 64, 00:37:43.891 "max_r2t_per_connection": 4, 00:37:43.891 "pdu_pool_size": 36864, 00:37:43.891 "immediate_data_pool_size": 16384, 00:37:43.891 "data_out_pool_size": 2048 00:37:43.891 } 00:37:43.891 } 00:37:43.891 ] 00:37:43.891 } 00:37:43.891 ] 00:37:43.891 } 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2235345 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2235345 ']' 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2235345 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.891 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235345 00:37:44.156 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:44.156 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:44.156 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235345' 00:37:44.156 killing process with pid 2235345 00:37:44.156 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2235345 00:37:44.156 10:48:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2235345 00:37:44.417 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2235527 00:37:44.417 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:37:44.417 10:48:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2235527 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2235527 ']' 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2235527 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235527 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235527' 00:37:49.712 killing process with pid 2235527 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2235527 00:37:49.712 10:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2235527 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:37:49.974 00:37:49.974 real 0m6.685s 00:37:49.974 user 0m6.333s 00:37:49.974 sys 0m0.791s 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:37:49.974 ************************************ 00:37:49.974 END TEST skip_rpc_with_json 00:37:49.974 ************************************ 00:37:49.974 10:48:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:37:49.974 10:48:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:49.974 10:48:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.974 10:48:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:49.974 ************************************ 00:37:49.974 START TEST skip_rpc_with_delay 00:37:49.974 ************************************ 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:37:49.974 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:37:50.234 [2024-12-09 10:48:51.207703] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:50.234 00:37:50.234 real 0m0.094s 00:37:50.234 user 0m0.056s 00:37:50.234 sys 0m0.037s 00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:50.234 10:48:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:37:50.234 ************************************ 00:37:50.234 END TEST skip_rpc_with_delay 00:37:50.234 ************************************ 00:37:50.234 10:48:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:37:50.234 10:48:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:37:50.234 10:48:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:37:50.234 10:48:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:50.234 10:48:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:50.234 10:48:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:50.234 ************************************ 00:37:50.234 START TEST exit_on_failed_rpc_init 00:37:50.234 ************************************ 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2236292 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2236292 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2236292 ']' 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.234 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.235 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.235 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.235 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:37:50.235 [2024-12-09 10:48:51.388118] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:37:50.235 [2024-12-09 10:48:51.388191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236292 ] 00:37:50.495 [2024-12-09 10:48:51.514468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.495 [2024-12-09 10:48:51.569126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:37:50.755 
10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:37:50.755 10:48:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:37:50.755 [2024-12-09 10:48:51.876802] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:37:50.755 [2024-12-09 10:48:51.876858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236347 ] 00:37:51.016 [2024-12-09 10:48:51.956174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.016 [2024-12-09 10:48:52.000592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.016 [2024-12-09 10:48:52.000662] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:37:51.016 [2024-12-09 10:48:52.000676] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:51.016 [2024-12-09 10:48:52.000684] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2236292 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2236292 ']' 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2236292 00:37:51.016 10:48:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236292 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236292' 00:37:51.016 killing process with pid 2236292 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2236292 00:37:51.016 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2236292 00:37:51.587 00:37:51.588 real 0m1.257s 00:37:51.588 user 0m1.337s 00:37:51.588 sys 0m0.510s 00:37:51.588 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.588 10:48:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:37:51.588 ************************************ 00:37:51.588 END TEST exit_on_failed_rpc_init 00:37:51.588 ************************************ 00:37:51.588 10:48:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:37:51.588 00:37:51.588 real 0m14.112s 00:37:51.588 user 0m13.191s 00:37:51.588 sys 0m2.039s 00:37:51.588 10:48:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.588 10:48:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:51.588 ************************************ 00:37:51.588 END TEST skip_rpc 00:37:51.588 ************************************ 00:37:51.588 10:48:52 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:37:51.588 10:48:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.588 10:48:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.588 10:48:52 -- common/autotest_common.sh@10 -- # set +x 00:37:51.588 ************************************ 00:37:51.588 START TEST rpc_client 00:37:51.588 ************************************ 00:37:51.588 10:48:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:37:51.849 * Looking for test storage... 00:37:51.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.849 10:48:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.849 --rc genhtml_branch_coverage=1 00:37:51.849 --rc genhtml_function_coverage=1 00:37:51.849 --rc genhtml_legend=1 00:37:51.849 --rc geninfo_all_blocks=1 00:37:51.849 --rc geninfo_unexecuted_blocks=1 00:37:51.849 00:37:51.849 ' 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.849 --rc genhtml_branch_coverage=1 
00:37:51.849 --rc genhtml_function_coverage=1 00:37:51.849 --rc genhtml_legend=1 00:37:51.849 --rc geninfo_all_blocks=1 00:37:51.849 --rc geninfo_unexecuted_blocks=1 00:37:51.849 00:37:51.849 ' 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.849 --rc genhtml_branch_coverage=1 00:37:51.849 --rc genhtml_function_coverage=1 00:37:51.849 --rc genhtml_legend=1 00:37:51.849 --rc geninfo_all_blocks=1 00:37:51.849 --rc geninfo_unexecuted_blocks=1 00:37:51.849 00:37:51.849 ' 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.849 --rc genhtml_branch_coverage=1 00:37:51.849 --rc genhtml_function_coverage=1 00:37:51.849 --rc genhtml_legend=1 00:37:51.849 --rc geninfo_all_blocks=1 00:37:51.849 --rc geninfo_unexecuted_blocks=1 00:37:51.849 00:37:51.849 ' 00:37:51.849 10:48:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:37:51.849 OK 00:37:51.849 10:48:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:37:51.849 00:37:51.849 real 0m0.219s 00:37:51.849 user 0m0.125s 00:37:51.849 sys 0m0.110s 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.849 10:48:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:37:51.849 ************************************ 00:37:51.849 END TEST rpc_client 00:37:51.849 ************************************ 00:37:51.849 10:48:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:37:51.849 10:48:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.849 10:48:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.849 10:48:52 -- common/autotest_common.sh@10 
-- # set +x 00:37:51.849 ************************************ 00:37:51.849 START TEST json_config 00:37:51.849 ************************************ 00:37:51.849 10:48:53 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.112 10:48:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.112 10:48:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.112 10:48:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.112 10:48:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.112 10:48:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.112 10:48:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:37:52.112 10:48:53 json_config -- scripts/common.sh@345 -- # : 1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.112 10:48:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:52.112 10:48:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@353 -- # local d=1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.112 10:48:53 json_config -- scripts/common.sh@355 -- # echo 1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.112 10:48:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@353 -- # local d=2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.112 10:48:53 json_config -- scripts/common.sh@355 -- # echo 2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.112 10:48:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.112 10:48:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.112 10:48:53 json_config -- scripts/common.sh@368 -- # return 0 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:52.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.112 --rc genhtml_branch_coverage=1 00:37:52.112 --rc genhtml_function_coverage=1 00:37:52.112 --rc genhtml_legend=1 00:37:52.112 --rc geninfo_all_blocks=1 00:37:52.112 --rc geninfo_unexecuted_blocks=1 00:37:52.112 00:37:52.112 ' 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:52.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.112 --rc genhtml_branch_coverage=1 00:37:52.112 --rc genhtml_function_coverage=1 00:37:52.112 --rc genhtml_legend=1 00:37:52.112 --rc geninfo_all_blocks=1 00:37:52.112 --rc geninfo_unexecuted_blocks=1 00:37:52.112 00:37:52.112 ' 00:37:52.112 10:48:53 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:52.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.112 --rc genhtml_branch_coverage=1 00:37:52.112 --rc genhtml_function_coverage=1 00:37:52.112 --rc genhtml_legend=1 00:37:52.112 --rc geninfo_all_blocks=1 00:37:52.112 --rc geninfo_unexecuted_blocks=1 00:37:52.112 00:37:52.112 ' 00:37:52.112 10:48:53 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:52.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.112 --rc genhtml_branch_coverage=1 00:37:52.112 --rc genhtml_function_coverage=1 00:37:52.112 --rc genhtml_legend=1 00:37:52.112 --rc geninfo_all_blocks=1 00:37:52.112 --rc geninfo_unexecuted_blocks=1 00:37:52.112 00:37:52.112 ' 00:37:52.112 10:48:53 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.112 10:48:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.113 10:48:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.113 10:48:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.113 10:48:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.113 10:48:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.113 10:48:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.113 10:48:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.113 10:48:53 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.113 10:48:53 json_config -- paths/export.sh@5 -- # export PATH 00:37:52.113 10:48:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@51 -- # : 0 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:52.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:52.113 10:48:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:37:52.113 INFO: JSON configuration test init 00:37:52.113 10:48:53 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:52.113 10:48:53 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:37:52.113 10:48:53 json_config -- json_config/common.sh@9 -- # local app=target 00:37:52.113 10:48:53 json_config -- json_config/common.sh@10 -- # shift 00:37:52.113 10:48:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:37:52.113 10:48:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:37:52.113 10:48:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:37:52.113 10:48:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:52.113 10:48:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:37:52.113 10:48:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2236654 00:37:52.113 10:48:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:37:52.113 Waiting for target to run... 
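Earlier in this transcript the test gates its lcov options on a version check (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`), splitting each version string on `.`, `-` and `:` and comparing component-wise. A minimal sketch of that idea; `ver_lt` is a hypothetical name, not SPDK's actual helper:

```shell
# Split two version strings on dots/dashes/colons and compare numerically,
# component by component; missing components count as 0.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly less: done
        (( a > b )) && return 1   # strictly greater: done
    done
    return 1   # equal is not less-than
}
```

Numeric comparison is what makes `1.9 < 1.15` come out true here, which a plain string compare would get wrong.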
00:37:52.113 10:48:53 json_config -- json_config/common.sh@25 -- # waitforlisten 2236654 /var/tmp/spdk_tgt.sock 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 2236654 ']' 00:37:52.113 10:48:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:37:52.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.113 10:48:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:52.375 [2024-12-09 10:48:53.288063] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:37:52.375 [2024-12-09 10:48:53.288139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236654 ] 00:37:52.636 [2024-12-09 10:48:53.657845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.636 [2024-12-09 10:48:53.701743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:37:53.207 10:48:54 json_config -- json_config/common.sh@26 -- # echo '' 00:37:53.207 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.207 10:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:37:53.207 10:48:54 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:37:53.207 10:48:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:37:56.513 10:48:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.513 10:48:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:37:56.513 10:48:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:37:56.513 10:48:57 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@51 -- # local get_types 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@54 -- # sort 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:37:56.774 10:48:57 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:37:56.774 10:48:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.774 10:48:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@62 -- # return 0 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:37:56.774 10:48:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.774 10:48:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:37:56.774 10:48:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:37:56.774 10:48:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:37:57.036 MallocForNvmf0 00:37:57.036 10:48:58 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
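The NVMe-oF configuration built in this part of the run is a fixed sequence of `rpc.py` calls against the target's UNIX socket. The method names, sizes, and NQN below match the transcript; `rpc` is a dry-run stand-in that only echoes the command it would issue, so the sketch runs without a live target:

```shell
# Dry-run stand-in for scripts/rpc.py -s /var/tmp/spdk_tgt.sock.
rpc() { echo "rpc.py -s /var/tmp/spdk_tgt.sock $*"; }

rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

Against a real target, replacing the `rpc` function with `scripts/rpc.py -s /var/tmp/spdk_tgt.sock` executes the same sequence the log shows.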
00:37:57.036 10:48:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:37:57.297 MallocForNvmf1 00:37:57.297 10:48:58 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:37:57.297 10:48:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:37:57.558 [2024-12-09 10:48:58.692055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.558 10:48:58 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:57.558 10:48:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:57.819 10:48:58 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:37:57.819 10:48:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:37:58.391 10:48:59 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:37:58.391 10:48:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:37:58.391 10:48:59 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:37:58.391 10:48:59 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:37:58.652 [2024-12-09 10:48:59.791417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:58.652 10:48:59 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:37:58.652 10:48:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.652 10:48:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:58.913 10:48:59 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:37:58.913 10:48:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.913 10:48:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:58.913 10:48:59 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:37:58.913 10:48:59 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:37:58.913 10:48:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:37:59.173 MallocBdevForConfigChangeCheck 00:37:59.173 10:49:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:37:59.173 10:49:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.173 10:49:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:37:59.173 10:49:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:37:59.173 10:49:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:37:59.744 10:49:00 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:37:59.744 INFO: shutting down applications... 00:37:59.744 10:49:00 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:37:59.744 10:49:00 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:37:59.744 10:49:00 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:37:59.744 10:49:00 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:38:03.953 Calling clear_iscsi_subsystem 00:38:03.953 Calling clear_nvmf_subsystem 00:38:03.953 Calling clear_nbd_subsystem 00:38:03.953 Calling clear_ublk_subsystem 00:38:03.953 Calling clear_vhost_blk_subsystem 00:38:03.953 Calling clear_vhost_scsi_subsystem 00:38:03.953 Calling clear_bdev_subsystem 00:38:03.953 10:49:04 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:38:03.953 10:49:04 json_config -- json_config/json_config.sh@350 -- # count=100 00:38:03.953 10:49:04 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:38:03.953 10:49:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:38:03.953 10:49:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:38:03.954 10:49:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:38:03.954 10:49:04 json_config -- json_config/json_config.sh@352 -- # break 00:38:03.954 10:49:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:38:03.954 10:49:04 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:38:03.954 10:49:04 json_config -- json_config/common.sh@31 -- # local app=target 00:38:03.954 10:49:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:38:03.954 10:49:04 json_config -- json_config/common.sh@35 -- # [[ -n 2236654 ]] 00:38:03.954 10:49:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2236654 00:38:03.954 10:49:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:38:03.954 10:49:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:38:03.954 10:49:05 json_config -- json_config/common.sh@41 -- # kill -0 2236654 00:38:03.954 10:49:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:38:04.523 10:49:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:38:04.523 10:49:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:38:04.523 10:49:05 json_config -- json_config/common.sh@41 -- # kill -0 2236654 00:38:04.523 10:49:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:38:04.523 10:49:05 json_config -- json_config/common.sh@43 -- # break 00:38:04.523 10:49:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:38:04.523 10:49:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:38:04.523 SPDK target shutdown done 00:38:04.523 10:49:05 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:38:04.523 INFO: relaunching applications... 
00:38:04.523 10:49:05 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:04.523 10:49:05 json_config -- json_config/common.sh@9 -- # local app=target 00:38:04.523 10:49:05 json_config -- json_config/common.sh@10 -- # shift 00:38:04.523 10:49:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:38:04.523 10:49:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:38:04.523 10:49:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:38:04.523 10:49:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:38:04.523 10:49:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:38:04.523 10:49:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2238423 00:38:04.523 10:49:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:38:04.523 Waiting for target to run... 00:38:04.523 10:49:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2238423 /var/tmp/spdk_tgt.sock 00:38:04.523 10:49:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 2238423 ']' 00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:38:04.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:04.523 10:49:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:04.523 [2024-12-09 10:49:05.577876] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:04.523 [2024-12-09 10:49:05.577967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238423 ] 00:38:05.094 [2024-12-09 10:49:06.120585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.094 [2024-12-09 10:49:06.185106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.393 [2024-12-09 10:49:09.251298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:08.393 [2024-12-09 10:49:09.283572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:08.977 10:49:10 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.977 10:49:10 json_config -- common/autotest_common.sh@868 -- # return 0 00:38:08.977 10:49:10 json_config -- json_config/common.sh@26 -- # echo '' 00:38:08.977 00:38:08.977 10:49:10 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:38:08.977 10:49:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:38:08.977 INFO: Checking if target configuration is the same... 
00:38:08.977 10:49:10 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:08.977 10:49:10 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:38:08.977 10:49:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:38:08.977 + '[' 2 -ne 2 ']' 00:38:08.977 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:38:08.977 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:38:08.977 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:08.977 +++ basename /dev/fd/62 00:38:08.977 ++ mktemp /tmp/62.XXX 00:38:08.977 + tmp_file_1=/tmp/62.xts 00:38:08.977 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:08.977 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:38:08.977 + tmp_file_2=/tmp/spdk_tgt_config.json.Y2d 00:38:08.977 + ret=0 00:38:08.977 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:38:09.553 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:38:09.553 + diff -u /tmp/62.xts /tmp/spdk_tgt_config.json.Y2d 00:38:09.553 + echo 'INFO: JSON config files are the same' 00:38:09.553 INFO: JSON config files are the same 00:38:09.553 + rm /tmp/62.xts /tmp/spdk_tgt_config.json.Y2d 00:38:09.553 + exit 0 00:38:09.553 10:49:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:38:09.553 10:49:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:38:09.553 INFO: changing configuration and checking if this can be detected... 
00:38:09.553 10:49:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:38:09.553 10:49:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:38:09.812 10:49:10 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:09.812 10:49:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:38:09.812 10:49:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:38:09.812 + '[' 2 -ne 2 ']' 00:38:09.812 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:38:09.812 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:38:09.812 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:09.812 +++ basename /dev/fd/62 00:38:09.812 ++ mktemp /tmp/62.XXX 00:38:09.812 + tmp_file_1=/tmp/62.FMF 00:38:09.812 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:09.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:38:09.812 + tmp_file_2=/tmp/spdk_tgt_config.json.tBR 00:38:09.812 + ret=0 00:38:09.812 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:38:10.382 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:38:10.382 + diff -u /tmp/62.FMF /tmp/spdk_tgt_config.json.tBR 00:38:10.382 + ret=1 00:38:10.382 + echo '=== Start of file: /tmp/62.FMF ===' 00:38:10.382 + cat /tmp/62.FMF 00:38:10.382 + echo '=== End of file: /tmp/62.FMF ===' 00:38:10.382 + echo '' 00:38:10.382 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tBR ===' 00:38:10.382 + cat /tmp/spdk_tgt_config.json.tBR 00:38:10.382 + echo '=== End of file: /tmp/spdk_tgt_config.json.tBR ===' 00:38:10.382 + echo '' 00:38:10.382 + rm /tmp/62.FMF /tmp/spdk_tgt_config.json.tBR 00:38:10.382 + exit 1 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:38:10.382 INFO: configuration change detected. 
00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 2238423 ]] 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:10.382 10:49:11 json_config -- json_config/json_config.sh@330 -- # killprocess 2238423 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@954 -- # '[' -z 2238423 ']' 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@958 -- # kill -0 
2238423 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@959 -- # uname 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238423 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238423' 00:38:10.382 killing process with pid 2238423 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@973 -- # kill 2238423 00:38:10.382 10:49:11 json_config -- common/autotest_common.sh@978 -- # wait 2238423 00:38:14.584 10:49:15 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:38:14.584 10:49:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:38:14.584 10:49:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:14.584 10:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:14.584 10:49:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:38:14.584 10:49:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:38:14.584 INFO: Success 00:38:14.584 00:38:14.584 real 0m22.419s 00:38:14.584 user 0m24.500s 00:38:14.584 sys 0m3.050s 00:38:14.584 10:49:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.584 10:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:38:14.584 ************************************ 00:38:14.584 END TEST json_config 00:38:14.584 ************************************ 00:38:14.584 10:49:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:38:14.584 10:49:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.584 10:49:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.584 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:38:14.584 ************************************ 00:38:14.584 START TEST json_config_extra_key 00:38:14.584 ************************************ 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.584 10:49:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:14.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.584 --rc genhtml_branch_coverage=1 00:38:14.584 --rc genhtml_function_coverage=1 00:38:14.584 --rc genhtml_legend=1 00:38:14.584 --rc geninfo_all_blocks=1 
00:38:14.584 --rc geninfo_unexecuted_blocks=1 00:38:14.584 00:38:14.584 ' 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:14.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.584 --rc genhtml_branch_coverage=1 00:38:14.584 --rc genhtml_function_coverage=1 00:38:14.584 --rc genhtml_legend=1 00:38:14.584 --rc geninfo_all_blocks=1 00:38:14.584 --rc geninfo_unexecuted_blocks=1 00:38:14.584 00:38:14.584 ' 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:14.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.584 --rc genhtml_branch_coverage=1 00:38:14.584 --rc genhtml_function_coverage=1 00:38:14.584 --rc genhtml_legend=1 00:38:14.584 --rc geninfo_all_blocks=1 00:38:14.584 --rc geninfo_unexecuted_blocks=1 00:38:14.584 00:38:14.584 ' 00:38:14.584 10:49:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:14.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.584 --rc genhtml_branch_coverage=1 00:38:14.584 --rc genhtml_function_coverage=1 00:38:14.584 --rc genhtml_legend=1 00:38:14.584 --rc geninfo_all_blocks=1 00:38:14.584 --rc geninfo_unexecuted_blocks=1 00:38:14.584 00:38:14.584 ' 00:38:14.584 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.584 10:49:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.585 10:49:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.585 10:49:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.585 10:49:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.585 10:49:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.585 10:49:15 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.585 10:49:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.585 10:49:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.585 10:49:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:38:14.585 10:49:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:38:14.585 10:49:15 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.585 10:49:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:38:14.585 INFO: launching applications... 00:38:14.585 10:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2239875 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:38:14.585 Waiting for target to run... 
00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2239875 /var/tmp/spdk_tgt.sock 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2239875 ']' 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:38:14.585 10:49:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:38:14.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.585 10:49:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:38:14.846 [2024-12-09 10:49:15.791104] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:14.846 [2024-12-09 10:49:15.791165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239875 ] 00:38:15.106 [2024-12-09 10:49:16.281362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.367 [2024-12-09 10:49:16.340445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.628 10:49:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:15.628 10:49:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:38:15.628 10:49:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:38:15.628 00:38:15.628 10:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:38:15.629 INFO: shutting down applications... 00:38:15.629 10:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2239875 ]] 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2239875 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2239875 00:38:15.629 10:49:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:38:16.201 10:49:17 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2239875 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:38:16.201 10:49:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:38:16.201 SPDK target shutdown done 00:38:16.201 10:49:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:38:16.201 Success 00:38:16.201 00:38:16.201 real 0m1.780s 00:38:16.201 user 0m1.589s 00:38:16.201 sys 0m0.650s 00:38:16.201 10:49:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.201 10:49:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:38:16.201 ************************************ 00:38:16.201 END TEST json_config_extra_key 00:38:16.201 ************************************ 00:38:16.201 10:49:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:38:16.201 10:49:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.201 10:49:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.201 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:38:16.462 ************************************ 00:38:16.462 START TEST alias_rpc 00:38:16.462 ************************************ 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:38:16.462 * Looking for test storage... 
00:38:16.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.462 10:49:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.462 10:49:17 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.462 --rc genhtml_branch_coverage=1 00:38:16.462 --rc genhtml_function_coverage=1 00:38:16.462 --rc genhtml_legend=1 00:38:16.463 --rc geninfo_all_blocks=1 00:38:16.463 --rc geninfo_unexecuted_blocks=1 00:38:16.463 00:38:16.463 ' 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:16.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.463 --rc genhtml_branch_coverage=1 00:38:16.463 --rc genhtml_function_coverage=1 00:38:16.463 --rc genhtml_legend=1 00:38:16.463 --rc geninfo_all_blocks=1 00:38:16.463 --rc geninfo_unexecuted_blocks=1 00:38:16.463 00:38:16.463 ' 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:38:16.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.463 --rc genhtml_branch_coverage=1 00:38:16.463 --rc genhtml_function_coverage=1 00:38:16.463 --rc genhtml_legend=1 00:38:16.463 --rc geninfo_all_blocks=1 00:38:16.463 --rc geninfo_unexecuted_blocks=1 00:38:16.463 00:38:16.463 ' 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:16.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.463 --rc genhtml_branch_coverage=1 00:38:16.463 --rc genhtml_function_coverage=1 00:38:16.463 --rc genhtml_legend=1 00:38:16.463 --rc geninfo_all_blocks=1 00:38:16.463 --rc geninfo_unexecuted_blocks=1 00:38:16.463 00:38:16.463 ' 00:38:16.463 10:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:38:16.463 10:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2240282 00:38:16.463 10:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:16.463 10:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2240282 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2240282 ']' 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.463 10:49:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:16.724 [2024-12-09 10:49:17.677010] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:16.724 [2024-12-09 10:49:17.677090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240282 ] 00:38:16.724 [2024-12-09 10:49:17.802617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.724 [2024-12-09 10:49:17.854428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.666 10:49:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.666 10:49:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:17.666 10:49:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:38:17.926 10:49:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2240282 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2240282 ']' 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2240282 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240282 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240282' 00:38:17.926 killing process with pid 2240282 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 2240282 00:38:17.926 10:49:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 2240282 00:38:18.188 00:38:18.188 real 0m1.972s 00:38:18.188 user 0m2.192s 00:38:18.188 sys 0m0.559s 00:38:18.188 10:49:19 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.188 10:49:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:18.188 ************************************ 00:38:18.188 END TEST alias_rpc 00:38:18.188 ************************************ 00:38:18.449 10:49:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:38:18.449 10:49:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:38:18.449 10:49:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.449 10:49:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.449 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:38:18.449 ************************************ 00:38:18.449 START TEST spdkcli_tcp 00:38:18.449 ************************************ 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:38:18.449 * Looking for test storage... 
00:38:18.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.449 10:49:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:18.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.449 --rc genhtml_branch_coverage=1 00:38:18.449 --rc genhtml_function_coverage=1 00:38:18.449 --rc genhtml_legend=1 00:38:18.449 --rc geninfo_all_blocks=1 00:38:18.449 --rc geninfo_unexecuted_blocks=1 00:38:18.449 00:38:18.449 ' 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:18.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.449 --rc genhtml_branch_coverage=1 00:38:18.449 --rc genhtml_function_coverage=1 00:38:18.449 --rc genhtml_legend=1 00:38:18.449 --rc geninfo_all_blocks=1 00:38:18.449 --rc geninfo_unexecuted_blocks=1 00:38:18.449 00:38:18.449 ' 00:38:18.449 10:49:19 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:18.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.449 --rc genhtml_branch_coverage=1 00:38:18.449 --rc genhtml_function_coverage=1 00:38:18.449 --rc genhtml_legend=1 00:38:18.449 --rc geninfo_all_blocks=1 00:38:18.449 --rc geninfo_unexecuted_blocks=1 00:38:18.449 00:38:18.449 ' 00:38:18.449 10:49:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:18.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.450 --rc genhtml_branch_coverage=1 00:38:18.450 --rc genhtml_function_coverage=1 00:38:18.450 --rc genhtml_legend=1 00:38:18.450 --rc geninfo_all_blocks=1 00:38:18.450 --rc geninfo_unexecuted_blocks=1 00:38:18.450 00:38:18.450 ' 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:18.450 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:38:18.450 10:49:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.450 10:49:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:18.798 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2240536 00:38:18.798 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2240536 00:38:18.798 10:49:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2240536 ']' 00:38:18.798 
10:49:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.798 10:49:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.798 10:49:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.798 10:49:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.798 10:49:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:18.798 10:49:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:38:18.798 [2024-12-09 10:49:19.696246] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:18.798 [2024-12-09 10:49:19.696334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240536 ] 00:38:18.798 [2024-12-09 10:49:19.822361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:18.798 [2024-12-09 10:49:19.878033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.798 [2024-12-09 10:49:19.878039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.739 10:49:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.739 10:49:20 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:38:19.739 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2240711 00:38:19.739 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:38:19.739 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:38:19.739 [ 00:38:19.739 "bdev_malloc_delete", 00:38:19.739 "bdev_malloc_create", 00:38:19.739 "bdev_null_resize", 00:38:19.739 "bdev_null_delete", 00:38:19.739 "bdev_null_create", 00:38:19.739 "bdev_nvme_cuse_unregister", 00:38:19.739 "bdev_nvme_cuse_register", 00:38:19.739 "bdev_opal_new_user", 00:38:19.739 "bdev_opal_set_lock_state", 00:38:19.739 "bdev_opal_delete", 00:38:19.739 "bdev_opal_get_info", 00:38:19.739 "bdev_opal_create", 00:38:19.739 "bdev_nvme_opal_revert", 00:38:19.739 "bdev_nvme_opal_init", 00:38:19.739 "bdev_nvme_send_cmd", 00:38:19.739 "bdev_nvme_set_keys", 00:38:19.739 "bdev_nvme_get_path_iostat", 00:38:19.739 "bdev_nvme_get_mdns_discovery_info", 00:38:19.739 "bdev_nvme_stop_mdns_discovery", 00:38:19.739 "bdev_nvme_start_mdns_discovery", 00:38:19.739 "bdev_nvme_set_multipath_policy", 00:38:19.739 "bdev_nvme_set_preferred_path", 00:38:19.739 "bdev_nvme_get_io_paths", 00:38:19.739 "bdev_nvme_remove_error_injection", 00:38:19.739 "bdev_nvme_add_error_injection", 00:38:19.739 "bdev_nvme_get_discovery_info", 00:38:19.739 "bdev_nvme_stop_discovery", 00:38:19.739 "bdev_nvme_start_discovery", 00:38:19.739 "bdev_nvme_get_controller_health_info", 00:38:19.739 "bdev_nvme_disable_controller", 00:38:19.739 "bdev_nvme_enable_controller", 00:38:19.739 "bdev_nvme_reset_controller", 00:38:19.739 "bdev_nvme_get_transport_statistics", 00:38:19.739 "bdev_nvme_apply_firmware", 00:38:19.739 "bdev_nvme_detach_controller", 00:38:19.739 "bdev_nvme_get_controllers", 00:38:19.739 "bdev_nvme_attach_controller", 00:38:19.739 "bdev_nvme_set_hotplug", 00:38:19.739 "bdev_nvme_set_options", 00:38:19.739 "bdev_passthru_delete", 00:38:19.739 "bdev_passthru_create", 00:38:19.739 "bdev_lvol_set_parent_bdev", 00:38:19.739 "bdev_lvol_set_parent", 00:38:19.739 "bdev_lvol_check_shallow_copy", 00:38:19.739 "bdev_lvol_start_shallow_copy", 00:38:19.739 "bdev_lvol_grow_lvstore", 00:38:19.739 "bdev_lvol_get_lvols", 00:38:19.739 "bdev_lvol_get_lvstores", 
00:38:19.739 "bdev_lvol_delete", 00:38:19.739 "bdev_lvol_set_read_only", 00:38:19.739 "bdev_lvol_resize", 00:38:19.739 "bdev_lvol_decouple_parent", 00:38:19.739 "bdev_lvol_inflate", 00:38:19.739 "bdev_lvol_rename", 00:38:19.739 "bdev_lvol_clone_bdev", 00:38:19.739 "bdev_lvol_clone", 00:38:19.739 "bdev_lvol_snapshot", 00:38:19.739 "bdev_lvol_create", 00:38:19.739 "bdev_lvol_delete_lvstore", 00:38:19.739 "bdev_lvol_rename_lvstore", 00:38:19.739 "bdev_lvol_create_lvstore", 00:38:19.739 "bdev_raid_set_options", 00:38:19.739 "bdev_raid_remove_base_bdev", 00:38:19.739 "bdev_raid_add_base_bdev", 00:38:19.739 "bdev_raid_delete", 00:38:19.739 "bdev_raid_create", 00:38:19.739 "bdev_raid_get_bdevs", 00:38:19.739 "bdev_error_inject_error", 00:38:19.739 "bdev_error_delete", 00:38:19.739 "bdev_error_create", 00:38:19.739 "bdev_split_delete", 00:38:19.739 "bdev_split_create", 00:38:19.739 "bdev_delay_delete", 00:38:19.739 "bdev_delay_create", 00:38:19.739 "bdev_delay_update_latency", 00:38:19.739 "bdev_zone_block_delete", 00:38:19.739 "bdev_zone_block_create", 00:38:19.739 "blobfs_create", 00:38:19.739 "blobfs_detect", 00:38:19.739 "blobfs_set_cache_size", 00:38:19.739 "bdev_aio_delete", 00:38:19.739 "bdev_aio_rescan", 00:38:19.739 "bdev_aio_create", 00:38:19.739 "bdev_ftl_set_property", 00:38:19.739 "bdev_ftl_get_properties", 00:38:19.739 "bdev_ftl_get_stats", 00:38:19.739 "bdev_ftl_unmap", 00:38:19.739 "bdev_ftl_unload", 00:38:19.739 "bdev_ftl_delete", 00:38:19.739 "bdev_ftl_load", 00:38:19.739 "bdev_ftl_create", 00:38:19.739 "bdev_virtio_attach_controller", 00:38:19.739 "bdev_virtio_scsi_get_devices", 00:38:19.739 "bdev_virtio_detach_controller", 00:38:19.739 "bdev_virtio_blk_set_hotplug", 00:38:19.739 "bdev_iscsi_delete", 00:38:19.739 "bdev_iscsi_create", 00:38:19.739 "bdev_iscsi_set_options", 00:38:19.739 "accel_error_inject_error", 00:38:19.739 "ioat_scan_accel_module", 00:38:19.739 "dsa_scan_accel_module", 00:38:19.739 "iaa_scan_accel_module", 00:38:19.739 
"vfu_virtio_create_fs_endpoint", 00:38:19.739 "vfu_virtio_create_scsi_endpoint", 00:38:19.739 "vfu_virtio_scsi_remove_target", 00:38:19.739 "vfu_virtio_scsi_add_target", 00:38:19.739 "vfu_virtio_create_blk_endpoint", 00:38:19.739 "vfu_virtio_delete_endpoint", 00:38:19.739 "keyring_file_remove_key", 00:38:19.739 "keyring_file_add_key", 00:38:19.739 "keyring_linux_set_options", 00:38:19.739 "fsdev_aio_delete", 00:38:19.739 "fsdev_aio_create", 00:38:19.740 "iscsi_get_histogram", 00:38:19.740 "iscsi_enable_histogram", 00:38:19.740 "iscsi_set_options", 00:38:19.740 "iscsi_get_auth_groups", 00:38:19.740 "iscsi_auth_group_remove_secret", 00:38:19.740 "iscsi_auth_group_add_secret", 00:38:19.740 "iscsi_delete_auth_group", 00:38:19.740 "iscsi_create_auth_group", 00:38:19.740 "iscsi_set_discovery_auth", 00:38:19.740 "iscsi_get_options", 00:38:19.740 "iscsi_target_node_request_logout", 00:38:19.740 "iscsi_target_node_set_redirect", 00:38:19.740 "iscsi_target_node_set_auth", 00:38:19.740 "iscsi_target_node_add_lun", 00:38:19.740 "iscsi_get_stats", 00:38:19.740 "iscsi_get_connections", 00:38:19.740 "iscsi_portal_group_set_auth", 00:38:19.740 "iscsi_start_portal_group", 00:38:19.740 "iscsi_delete_portal_group", 00:38:19.740 "iscsi_create_portal_group", 00:38:19.740 "iscsi_get_portal_groups", 00:38:19.740 "iscsi_delete_target_node", 00:38:19.740 "iscsi_target_node_remove_pg_ig_maps", 00:38:19.740 "iscsi_target_node_add_pg_ig_maps", 00:38:19.740 "iscsi_create_target_node", 00:38:19.740 "iscsi_get_target_nodes", 00:38:19.740 "iscsi_delete_initiator_group", 00:38:19.740 "iscsi_initiator_group_remove_initiators", 00:38:19.740 "iscsi_initiator_group_add_initiators", 00:38:19.740 "iscsi_create_initiator_group", 00:38:19.740 "iscsi_get_initiator_groups", 00:38:19.740 "nvmf_set_crdt", 00:38:19.740 "nvmf_set_config", 00:38:19.740 "nvmf_set_max_subsystems", 00:38:19.740 "nvmf_stop_mdns_prr", 00:38:19.740 "nvmf_publish_mdns_prr", 00:38:19.740 "nvmf_subsystem_get_listeners", 00:38:19.740 
"nvmf_subsystem_get_qpairs", 00:38:19.740 "nvmf_subsystem_get_controllers", 00:38:19.740 "nvmf_get_stats", 00:38:19.740 "nvmf_get_transports", 00:38:19.740 "nvmf_create_transport", 00:38:19.740 "nvmf_get_targets", 00:38:19.740 "nvmf_delete_target", 00:38:19.740 "nvmf_create_target", 00:38:19.740 "nvmf_subsystem_allow_any_host", 00:38:19.740 "nvmf_subsystem_set_keys", 00:38:19.740 "nvmf_subsystem_remove_host", 00:38:19.740 "nvmf_subsystem_add_host", 00:38:19.740 "nvmf_ns_remove_host", 00:38:19.740 "nvmf_ns_add_host", 00:38:19.740 "nvmf_subsystem_remove_ns", 00:38:19.740 "nvmf_subsystem_set_ns_ana_group", 00:38:19.740 "nvmf_subsystem_add_ns", 00:38:19.740 "nvmf_subsystem_listener_set_ana_state", 00:38:19.740 "nvmf_discovery_get_referrals", 00:38:19.740 "nvmf_discovery_remove_referral", 00:38:19.740 "nvmf_discovery_add_referral", 00:38:19.740 "nvmf_subsystem_remove_listener", 00:38:19.740 "nvmf_subsystem_add_listener", 00:38:19.740 "nvmf_delete_subsystem", 00:38:19.740 "nvmf_create_subsystem", 00:38:19.740 "nvmf_get_subsystems", 00:38:19.740 "env_dpdk_get_mem_stats", 00:38:19.740 "nbd_get_disks", 00:38:19.740 "nbd_stop_disk", 00:38:19.740 "nbd_start_disk", 00:38:19.740 "ublk_recover_disk", 00:38:19.740 "ublk_get_disks", 00:38:19.740 "ublk_stop_disk", 00:38:19.740 "ublk_start_disk", 00:38:19.740 "ublk_destroy_target", 00:38:19.740 "ublk_create_target", 00:38:19.740 "virtio_blk_create_transport", 00:38:19.740 "virtio_blk_get_transports", 00:38:19.740 "vhost_controller_set_coalescing", 00:38:19.740 "vhost_get_controllers", 00:38:19.740 "vhost_delete_controller", 00:38:19.740 "vhost_create_blk_controller", 00:38:19.740 "vhost_scsi_controller_remove_target", 00:38:19.740 "vhost_scsi_controller_add_target", 00:38:19.740 "vhost_start_scsi_controller", 00:38:19.740 "vhost_create_scsi_controller", 00:38:19.740 "thread_set_cpumask", 00:38:19.740 "scheduler_set_options", 00:38:19.740 "framework_get_governor", 00:38:19.740 "framework_get_scheduler", 00:38:19.740 
"framework_set_scheduler", 00:38:19.740 "framework_get_reactors", 00:38:19.740 "thread_get_io_channels", 00:38:19.740 "thread_get_pollers", 00:38:19.740 "thread_get_stats", 00:38:19.740 "framework_monitor_context_switch", 00:38:19.740 "spdk_kill_instance", 00:38:19.740 "log_enable_timestamps", 00:38:19.740 "log_get_flags", 00:38:19.740 "log_clear_flag", 00:38:19.740 "log_set_flag", 00:38:19.740 "log_get_level", 00:38:19.740 "log_set_level", 00:38:19.740 "log_get_print_level", 00:38:19.740 "log_set_print_level", 00:38:19.740 "framework_enable_cpumask_locks", 00:38:19.740 "framework_disable_cpumask_locks", 00:38:19.740 "framework_wait_init", 00:38:19.740 "framework_start_init", 00:38:19.740 "scsi_get_devices", 00:38:19.740 "bdev_get_histogram", 00:38:19.740 "bdev_enable_histogram", 00:38:19.740 "bdev_set_qos_limit", 00:38:19.740 "bdev_set_qd_sampling_period", 00:38:19.740 "bdev_get_bdevs", 00:38:19.740 "bdev_reset_iostat", 00:38:19.740 "bdev_get_iostat", 00:38:19.740 "bdev_examine", 00:38:19.740 "bdev_wait_for_examine", 00:38:19.740 "bdev_set_options", 00:38:19.740 "accel_get_stats", 00:38:19.740 "accel_set_options", 00:38:19.740 "accel_set_driver", 00:38:19.740 "accel_crypto_key_destroy", 00:38:19.740 "accel_crypto_keys_get", 00:38:19.740 "accel_crypto_key_create", 00:38:19.740 "accel_assign_opc", 00:38:19.740 "accel_get_module_info", 00:38:19.740 "accel_get_opc_assignments", 00:38:19.740 "vmd_rescan", 00:38:19.740 "vmd_remove_device", 00:38:19.740 "vmd_enable", 00:38:19.740 "sock_get_default_impl", 00:38:19.740 "sock_set_default_impl", 00:38:19.740 "sock_impl_set_options", 00:38:19.740 "sock_impl_get_options", 00:38:19.740 "iobuf_get_stats", 00:38:19.740 "iobuf_set_options", 00:38:19.740 "keyring_get_keys", 00:38:19.740 "vfu_tgt_set_base_path", 00:38:19.740 "framework_get_pci_devices", 00:38:19.740 "framework_get_config", 00:38:19.740 "framework_get_subsystems", 00:38:19.740 "fsdev_set_opts", 00:38:19.740 "fsdev_get_opts", 00:38:19.740 "trace_get_info", 
00:38:19.740 "trace_get_tpoint_group_mask", 00:38:19.740 "trace_disable_tpoint_group", 00:38:19.740 "trace_enable_tpoint_group", 00:38:19.740 "trace_clear_tpoint_mask", 00:38:19.740 "trace_set_tpoint_mask", 00:38:19.740 "notify_get_notifications", 00:38:19.740 "notify_get_types", 00:38:19.740 "spdk_get_version", 00:38:19.740 "rpc_get_methods" 00:38:19.740 ] 00:38:19.740 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:19.740 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:19.740 10:49:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2240536 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2240536 ']' 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2240536 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.740 10:49:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240536 00:38:20.001 10:49:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:20.001 10:49:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:20.001 10:49:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240536' 00:38:20.001 killing process with pid 2240536 00:38:20.001 10:49:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2240536 00:38:20.001 10:49:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2240536 00:38:20.261 00:38:20.261 real 0m1.947s 00:38:20.261 user 0m3.530s 00:38:20.261 sys 0m0.602s 00:38:20.261 10:49:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:20.261 10:49:21 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:38:20.261 ************************************ 00:38:20.261 END TEST spdkcli_tcp 00:38:20.261 ************************************ 00:38:20.261 10:49:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:38:20.261 10:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:20.261 10:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.261 10:49:21 -- common/autotest_common.sh@10 -- # set +x 00:38:20.521 ************************************ 00:38:20.521 START TEST dpdk_mem_utility 00:38:20.521 ************************************ 00:38:20.521 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:38:20.521 * Looking for test storage... 00:38:20.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:38:20.521 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:20.521 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:38:20.521 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:20.521 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:20.521 10:49:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:20.781 10:49:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:38:20.781 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:20.781 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:38:20.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.781 --rc genhtml_branch_coverage=1 00:38:20.781 --rc genhtml_function_coverage=1 00:38:20.781 --rc genhtml_legend=1 00:38:20.782 --rc geninfo_all_blocks=1 00:38:20.782 --rc geninfo_unexecuted_blocks=1 00:38:20.782 00:38:20.782 ' 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:20.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.782 --rc genhtml_branch_coverage=1 00:38:20.782 --rc genhtml_function_coverage=1 00:38:20.782 --rc genhtml_legend=1 00:38:20.782 --rc geninfo_all_blocks=1 00:38:20.782 --rc geninfo_unexecuted_blocks=1 00:38:20.782 00:38:20.782 ' 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:20.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.782 --rc genhtml_branch_coverage=1 00:38:20.782 --rc genhtml_function_coverage=1 00:38:20.782 --rc genhtml_legend=1 00:38:20.782 --rc geninfo_all_blocks=1 00:38:20.782 --rc geninfo_unexecuted_blocks=1 00:38:20.782 00:38:20.782 ' 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:20.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.782 --rc genhtml_branch_coverage=1 00:38:20.782 --rc genhtml_function_coverage=1 00:38:20.782 --rc genhtml_legend=1 00:38:20.782 --rc geninfo_all_blocks=1 00:38:20.782 --rc geninfo_unexecuted_blocks=1 00:38:20.782 00:38:20.782 ' 00:38:20.782 10:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:38:20.782 10:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2240956 00:38:20.782 10:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2240956 00:38:20.782 10:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2240956 ']' 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.782 10:49:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:38:20.782 [2024-12-09 10:49:21.764608] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:20.782 [2024-12-09 10:49:21.764697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240956 ] 00:38:20.782 [2024-12-09 10:49:21.892453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.782 [2024-12-09 10:49:21.946421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.042 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.042 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:38:21.042 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:38:21.042 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:38:21.042 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.042 
10:49:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:38:21.042 { 00:38:21.042 "filename": "/tmp/spdk_mem_dump.txt" 00:38:21.042 } 00:38:21.042 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.042 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:38:21.305 DPDK memory size 818.000000 MiB in 1 heap(s) 00:38:21.305 1 heaps totaling size 818.000000 MiB 00:38:21.305 size: 818.000000 MiB heap id: 0 00:38:21.305 end heaps---------- 00:38:21.305 9 mempools totaling size 603.782043 MiB 00:38:21.305 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:38:21.305 size: 158.602051 MiB name: PDU_data_out_Pool 00:38:21.305 size: 100.555481 MiB name: bdev_io_2240956 00:38:21.305 size: 50.003479 MiB name: msgpool_2240956 00:38:21.305 size: 36.509338 MiB name: fsdev_io_2240956 00:38:21.305 size: 21.763794 MiB name: PDU_Pool 00:38:21.305 size: 19.513306 MiB name: SCSI_TASK_Pool 00:38:21.305 size: 4.133484 MiB name: evtpool_2240956 00:38:21.305 size: 0.026123 MiB name: Session_Pool 00:38:21.305 end mempools------- 00:38:21.305 6 memzones totaling size 4.142822 MiB 00:38:21.305 size: 1.000366 MiB name: RG_ring_0_2240956 00:38:21.305 size: 1.000366 MiB name: RG_ring_1_2240956 00:38:21.305 size: 1.000366 MiB name: RG_ring_4_2240956 00:38:21.305 size: 1.000366 MiB name: RG_ring_5_2240956 00:38:21.305 size: 0.125366 MiB name: RG_ring_2_2240956 00:38:21.305 size: 0.015991 MiB name: RG_ring_3_2240956 00:38:21.305 end memzones------- 00:38:21.305 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:38:21.305 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:38:21.305 list of free elements. 
size: 10.852478 MiB 00:38:21.305 element at address: 0x200019200000 with size: 0.999878 MiB 00:38:21.305 element at address: 0x200019400000 with size: 0.999878 MiB 00:38:21.305 element at address: 0x200000400000 with size: 0.998535 MiB 00:38:21.305 element at address: 0x200032000000 with size: 0.994446 MiB 00:38:21.305 element at address: 0x200006400000 with size: 0.959839 MiB 00:38:21.305 element at address: 0x200012c00000 with size: 0.944275 MiB 00:38:21.305 element at address: 0x200019600000 with size: 0.936584 MiB 00:38:21.305 element at address: 0x200000200000 with size: 0.717346 MiB 00:38:21.305 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:38:21.305 element at address: 0x200000c00000 with size: 0.495422 MiB 00:38:21.305 element at address: 0x20000a600000 with size: 0.490723 MiB 00:38:21.305 element at address: 0x200019800000 with size: 0.485657 MiB 00:38:21.305 element at address: 0x200003e00000 with size: 0.481934 MiB 00:38:21.305 element at address: 0x200028200000 with size: 0.410034 MiB 00:38:21.305 element at address: 0x200000800000 with size: 0.355042 MiB 00:38:21.305 list of standard malloc elements. 
size: 199.218628 MiB 00:38:21.305 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:38:21.305 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:38:21.305 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:38:21.305 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:38:21.305 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:38:21.305 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:38:21.305 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:38:21.305 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:38:21.305 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:38:21.305 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000085b040 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000085f300 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000087f680 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200000cff000 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:38:21.305 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200003efb980 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:38:21.305 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200028268f80 with size: 0.000183 MiB 00:38:21.305 element at address: 0x200028269040 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:38:21.305 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:38:21.305 list of memzone associated elements. 
size: 607.928894 MiB 00:38:21.305 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:38:21.305 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:38:21.305 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:38:21.305 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:38:21.305 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:38:21.305 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2240956_0 00:38:21.305 element at address: 0x200000dff380 with size: 48.003052 MiB 00:38:21.305 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2240956_0 00:38:21.305 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:38:21.305 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2240956_0 00:38:21.305 element at address: 0x2000199be940 with size: 20.255554 MiB 00:38:21.305 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:38:21.305 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:38:21.305 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:38:21.305 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:38:21.305 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2240956_0 00:38:21.305 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:38:21.305 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2240956 00:38:21.305 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:38:21.305 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2240956 00:38:21.305 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:38:21.305 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:38:21.305 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:38:21.305 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:38:21.305 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:38:21.305 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:38:21.305 element at address: 0x200003efba40 with size: 1.008118 MiB 00:38:21.305 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:38:21.305 element at address: 0x200000cff180 with size: 1.000488 MiB 00:38:21.305 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2240956 00:38:21.305 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:38:21.305 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2240956 00:38:21.305 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:38:21.305 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2240956 00:38:21.305 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:38:21.305 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2240956 00:38:21.305 element at address: 0x20000087f740 with size: 0.500488 MiB 00:38:21.305 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2240956 00:38:21.305 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:38:21.305 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2240956 00:38:21.305 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:38:21.305 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:38:21.306 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:38:21.306 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:38:21.306 element at address: 0x20001987c540 with size: 0.250488 MiB 00:38:21.306 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:38:21.306 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:38:21.306 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2240956 00:38:21.306 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:38:21.306 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2240956 00:38:21.306 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:38:21.306 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:38:21.306 element at address: 0x200028269100 with size: 0.023743 MiB 00:38:21.306 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:38:21.306 element at address: 0x20000085b100 with size: 0.016113 MiB 00:38:21.306 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2240956 00:38:21.306 element at address: 0x20002826f240 with size: 0.002441 MiB 00:38:21.306 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:38:21.306 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:38:21.306 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2240956 00:38:21.306 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:38:21.306 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2240956 00:38:21.306 element at address: 0x20000085af00 with size: 0.000305 MiB 00:38:21.306 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2240956 00:38:21.306 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:38:21.306 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:38:21.306 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:38:21.306 10:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2240956 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2240956 ']' 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2240956 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240956 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:21.306 10:49:22 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240956' 00:38:21.306 killing process with pid 2240956 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2240956 00:38:21.306 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2240956 00:38:21.876 00:38:21.876 real 0m1.363s 00:38:21.876 user 0m1.314s 00:38:21.876 sys 0m0.522s 00:38:21.876 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:21.876 10:49:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:38:21.876 ************************************ 00:38:21.876 END TEST dpdk_mem_utility 00:38:21.876 ************************************ 00:38:21.876 10:49:22 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:38:21.876 10:49:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:21.876 10:49:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:21.876 10:49:22 -- common/autotest_common.sh@10 -- # set +x 00:38:21.876 ************************************ 00:38:21.876 START TEST event 00:38:21.876 ************************************ 00:38:21.876 10:49:22 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:38:21.876 * Looking for test storage... 
00:38:21.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:38:21.876 10:49:23 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:21.876 10:49:23 event -- common/autotest_common.sh@1711 -- # lcov --version 00:38:21.876 10:49:23 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:22.136 10:49:23 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:22.136 10:49:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.136 10:49:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.136 10:49:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.136 10:49:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.136 10:49:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.136 10:49:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.136 10:49:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.136 10:49:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.136 10:49:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.136 10:49:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.136 10:49:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.136 10:49:23 event -- scripts/common.sh@344 -- # case "$op" in 00:38:22.136 10:49:23 event -- scripts/common.sh@345 -- # : 1 00:38:22.136 10:49:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.136 10:49:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.136 10:49:23 event -- scripts/common.sh@365 -- # decimal 1 00:38:22.136 10:49:23 event -- scripts/common.sh@353 -- # local d=1 00:38:22.136 10:49:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.136 10:49:23 event -- scripts/common.sh@355 -- # echo 1 00:38:22.136 10:49:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.136 10:49:23 event -- scripts/common.sh@366 -- # decimal 2 00:38:22.136 10:49:23 event -- scripts/common.sh@353 -- # local d=2 00:38:22.136 10:49:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.136 10:49:23 event -- scripts/common.sh@355 -- # echo 2 00:38:22.136 10:49:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.136 10:49:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.136 10:49:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.136 10:49:23 event -- scripts/common.sh@368 -- # return 0 00:38:22.136 10:49:23 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.136 10:49:23 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:22.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.136 --rc genhtml_branch_coverage=1 00:38:22.136 --rc genhtml_function_coverage=1 00:38:22.136 --rc genhtml_legend=1 00:38:22.136 --rc geninfo_all_blocks=1 00:38:22.136 --rc geninfo_unexecuted_blocks=1 00:38:22.136 00:38:22.136 ' 00:38:22.136 10:49:23 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:22.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.136 --rc genhtml_branch_coverage=1 00:38:22.136 --rc genhtml_function_coverage=1 00:38:22.136 --rc genhtml_legend=1 00:38:22.136 --rc geninfo_all_blocks=1 00:38:22.136 --rc geninfo_unexecuted_blocks=1 00:38:22.136 00:38:22.136 ' 00:38:22.136 10:49:23 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:22.136 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:38:22.136 --rc genhtml_branch_coverage=1 00:38:22.136 --rc genhtml_function_coverage=1 00:38:22.136 --rc genhtml_legend=1 00:38:22.136 --rc geninfo_all_blocks=1 00:38:22.137 --rc geninfo_unexecuted_blocks=1 00:38:22.137 00:38:22.137 ' 00:38:22.137 10:49:23 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:22.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.137 --rc genhtml_branch_coverage=1 00:38:22.137 --rc genhtml_function_coverage=1 00:38:22.137 --rc genhtml_legend=1 00:38:22.137 --rc geninfo_all_blocks=1 00:38:22.137 --rc geninfo_unexecuted_blocks=1 00:38:22.137 00:38:22.137 ' 00:38:22.137 10:49:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:38:22.137 10:49:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:38:22.137 10:49:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:38:22.137 10:49:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:38:22.137 10:49:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.137 10:49:23 event -- common/autotest_common.sh@10 -- # set +x 00:38:22.137 ************************************ 00:38:22.137 START TEST event_perf 00:38:22.137 ************************************ 00:38:22.137 10:49:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:38:22.137 Running I/O for 1 seconds...[2024-12-09 10:49:23.187739] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:22.137 [2024-12-09 10:49:23.187812] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241202 ] 00:38:22.397 [2024-12-09 10:49:23.313671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:22.397 [2024-12-09 10:49:23.371998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:22.397 [2024-12-09 10:49:23.372089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:22.397 [2024-12-09 10:49:23.372178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:22.397 [2024-12-09 10:49:23.372182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.337 Running I/O for 1 seconds... 00:38:23.337 lcore 0: 187536 00:38:23.337 lcore 1: 187530 00:38:23.337 lcore 2: 187531 00:38:23.337 lcore 3: 187533 00:38:23.337 done. 
00:38:23.337 00:38:23.337 real 0m1.318s 00:38:23.337 user 0m4.202s 00:38:23.337 sys 0m0.110s 00:38:23.337 10:49:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.337 10:49:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:38:23.337 ************************************ 00:38:23.337 END TEST event_perf 00:38:23.337 ************************************ 00:38:23.597 10:49:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:38:23.597 10:49:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:23.597 10:49:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.597 10:49:24 event -- common/autotest_common.sh@10 -- # set +x 00:38:23.597 ************************************ 00:38:23.597 START TEST event_reactor 00:38:23.597 ************************************ 00:38:23.597 10:49:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:38:23.597 [2024-12-09 10:49:24.576392] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:23.598 [2024-12-09 10:49:24.576454] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241405 ] 00:38:23.598 [2024-12-09 10:49:24.703338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.598 [2024-12-09 10:49:24.756374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.978 test_start 00:38:24.978 oneshot 00:38:24.978 tick 100 00:38:24.978 tick 100 00:38:24.978 tick 250 00:38:24.978 tick 100 00:38:24.978 tick 100 00:38:24.978 tick 250 00:38:24.978 tick 100 00:38:24.978 tick 500 00:38:24.978 tick 100 00:38:24.979 tick 100 00:38:24.979 tick 250 00:38:24.979 tick 100 00:38:24.979 tick 100 00:38:24.979 test_end 00:38:24.979 00:38:24.979 real 0m1.311s 00:38:24.979 user 0m1.205s 00:38:24.979 sys 0m0.099s 00:38:24.979 10:49:25 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.979 10:49:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:38:24.979 ************************************ 00:38:24.979 END TEST event_reactor 00:38:24.979 ************************************ 00:38:24.979 10:49:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:38:24.979 10:49:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:24.979 10:49:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.979 10:49:25 event -- common/autotest_common.sh@10 -- # set +x 00:38:24.979 ************************************ 00:38:24.979 START TEST event_reactor_perf 00:38:24.979 ************************************ 00:38:24.979 10:49:25 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:38:24.979 [2024-12-09 10:49:25.955631] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:24.979 [2024-12-09 10:49:25.955724] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241601 ] 00:38:24.979 [2024-12-09 10:49:26.081037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.979 [2024-12-09 10:49:26.134051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.363 test_start 00:38:26.363 test_end 00:38:26.363 Performance: 327196 events per second 00:38:26.363 00:38:26.363 real 0m1.311s 00:38:26.363 user 0m1.197s 00:38:26.363 sys 0m0.107s 00:38:26.363 10:49:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.363 10:49:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:38:26.363 ************************************ 00:38:26.363 END TEST event_reactor_perf 00:38:26.363 ************************************ 00:38:26.363 10:49:27 event -- event/event.sh@49 -- # uname -s 00:38:26.363 10:49:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:38:26.363 10:49:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:38:26.363 10:49:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:26.363 10:49:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.363 10:49:27 event -- common/autotest_common.sh@10 -- # set +x 00:38:26.363 ************************************ 00:38:26.363 START TEST event_scheduler 00:38:26.363 ************************************ 00:38:26.363 10:49:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:38:26.363 * Looking for test storage... 00:38:26.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:38:26.363 10:49:27 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:26.363 10:49:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:26.363 10:49:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:38:26.363 10:49:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:38:26.363 10:49:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:38:26.364 10:49:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:26.364 10:49:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:26.364 10:49:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:38:26.364 10:49:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:38:26.364 10:49:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:26.624 10:49:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:38:26.624 10:49:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:26.624 10:49:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:26.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.624 --rc genhtml_branch_coverage=1 00:38:26.624 --rc genhtml_function_coverage=1 00:38:26.624 --rc genhtml_legend=1 00:38:26.624 --rc geninfo_all_blocks=1 00:38:26.624 --rc geninfo_unexecuted_blocks=1 00:38:26.624 00:38:26.624 ' 00:38:26.624 10:49:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:26.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.624 --rc genhtml_branch_coverage=1 00:38:26.624 --rc genhtml_function_coverage=1 00:38:26.624 --rc 
genhtml_legend=1 00:38:26.624 --rc geninfo_all_blocks=1 00:38:26.624 --rc geninfo_unexecuted_blocks=1 00:38:26.624 00:38:26.624 ' 00:38:26.624 10:49:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:26.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.624 --rc genhtml_branch_coverage=1 00:38:26.624 --rc genhtml_function_coverage=1 00:38:26.624 --rc genhtml_legend=1 00:38:26.624 --rc geninfo_all_blocks=1 00:38:26.624 --rc geninfo_unexecuted_blocks=1 00:38:26.624 00:38:26.624 ' 00:38:26.624 10:49:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:26.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.624 --rc genhtml_branch_coverage=1 00:38:26.624 --rc genhtml_function_coverage=1 00:38:26.624 --rc genhtml_legend=1 00:38:26.624 --rc geninfo_all_blocks=1 00:38:26.625 --rc geninfo_unexecuted_blocks=1 00:38:26.625 00:38:26.625 ' 00:38:26.625 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:38:26.625 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2241833 00:38:26.625 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:38:26.625 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:38:26.625 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2241833 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2241833 ']' 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.625 10:49:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:38:26.625 [2024-12-09 10:49:27.603749] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:26.625 [2024-12-09 10:49:27.603826] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241833 ] 00:38:26.625 [2024-12-09 10:49:27.700151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:26.625 [2024-12-09 10:49:27.749517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.625 [2024-12-09 10:49:27.749605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.625 [2024-12-09 10:49:27.749704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:26.625 [2024-12-09 10:49:27.749707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:38:26.886 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 [2024-12-09 10:49:27.834473] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:38:26.886 [2024-12-09 10:49:27.834493] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:38:26.886 [2024-12-09 10:49:27.834505] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:38:26.886 [2024-12-09 10:49:27.834513] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:38:26.886 [2024-12-09 10:49:27.834521] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 [2024-12-09 10:49:27.919883] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 ************************************ 00:38:26.886 START TEST scheduler_create_thread 00:38:26.886 ************************************ 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 2 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 3 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 4 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 5 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 6 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 7 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 8 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:26.886 9 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.886 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:27.146 10 00:38:27.146 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.146 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:38:27.146 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.146 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:27.406 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.406 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:38:27.406 10:49:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:38:27.406 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.406 10:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:28.344 10:49:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.344 10:49:29 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:38:28.344 10:49:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.344 10:49:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:29.284 10:49:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.284 10:49:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:38:29.284 10:49:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:38:29.284 10:49:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.284 10:49:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:30.227 10:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.227 00:38:30.227 real 0m3.232s 00:38:30.227 user 0m0.026s 00:38:30.227 sys 0m0.008s 00:38:30.227 10:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.227 10:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:38:30.227 ************************************ 00:38:30.227 END TEST scheduler_create_thread 00:38:30.227 ************************************ 00:38:30.227 10:49:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:38:30.227 10:49:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2241833 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2241833 ']' 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2241833 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2241833 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2241833' 00:38:30.227 killing process with pid 2241833 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2241833 00:38:30.227 10:49:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2241833 00:38:30.488 [2024-12-09 10:49:31.572895] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:38:30.748 00:38:30.748 real 0m4.526s 00:38:30.748 user 0m7.832s 00:38:30.749 sys 0m0.467s 00:38:30.749 10:49:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.749 10:49:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:38:30.749 ************************************ 00:38:30.749 END TEST event_scheduler 00:38:30.749 ************************************ 00:38:30.749 10:49:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:38:30.749 10:49:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:38:30.749 10:49:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.749 10:49:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.749 10:49:31 event -- common/autotest_common.sh@10 -- # set +x 00:38:31.010 ************************************ 00:38:31.010 START TEST app_repeat 00:38:31.010 ************************************ 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2242427 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2242427' 00:38:31.010 
Process app_repeat pid: 2242427 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:38:31.010 spdk_app_start Round 0 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2242427 /var/tmp/spdk-nbd.sock 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2242427 ']' 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:31.010 10:49:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:38:31.010 10:49:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:38:31.010 [2024-12-09 10:49:31.979089] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:31.010 [2024-12-09 10:49:31.979157] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242427 ] 00:38:31.010 [2024-12-09 10:49:32.104980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:31.010 [2024-12-09 10:49:32.161285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.010 [2024-12-09 10:49:32.161291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.271 10:49:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.271 10:49:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:38:31.271 10:49:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:31.532 Malloc0 00:38:31.532 10:49:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:31.793 Malloc1 00:38:31.793 10:49:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:31.793 
10:49:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:31.793 10:49:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:38:32.053 /dev/nbd0 00:38:32.053 10:49:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:32.053 10:49:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:38:32.053 1+0 records in 00:38:32.053 1+0 records out 00:38:32.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243334 s, 16.8 MB/s 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:32.053 10:49:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:32.053 10:49:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:32.053 10:49:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:32.053 10:49:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:38:32.313 /dev/nbd1 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:32.313 10:49:33 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:38:32.313 1+0 records in 00:38:32.313 1+0 records out 00:38:32.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293833 s, 13.9 MB/s 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:32.313 10:49:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:32.313 10:49:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:32.573 10:49:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:32.573 { 00:38:32.573 "nbd_device": "/dev/nbd0", 00:38:32.573 "bdev_name": "Malloc0" 00:38:32.573 }, 00:38:32.573 { 00:38:32.573 "nbd_device": "/dev/nbd1", 00:38:32.573 "bdev_name": "Malloc1" 00:38:32.573 } 00:38:32.573 ]' 00:38:32.573 10:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:38:32.573 { 00:38:32.573 "nbd_device": "/dev/nbd0", 00:38:32.573 "bdev_name": "Malloc0" 00:38:32.573 
}, 00:38:32.573 { 00:38:32.573 "nbd_device": "/dev/nbd1", 00:38:32.573 "bdev_name": "Malloc1" 00:38:32.573 } 00:38:32.573 ]' 00:38:32.573 10:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:38:32.834 /dev/nbd1' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:38:32.834 /dev/nbd1' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:38:32.834 256+0 records in 00:38:32.834 256+0 records out 00:38:32.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105465 s, 99.4 MB/s 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:32.834 256+0 records in 00:38:32.834 256+0 records out 00:38:32.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197959 s, 53.0 MB/s 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:38:32.834 256+0 records in 00:38:32.834 256+0 records out 00:38:32.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215831 s, 48.6 MB/s 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:38:32.834 10:49:33 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:32.834 10:49:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:33.094 10:49:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:33.355 10:49:34 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:33.355 10:49:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:33.615 10:49:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:33.615 10:49:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:33.615 10:49:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:33.875 10:49:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:38:33.875 10:49:34 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:38:34.139 10:49:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:38:34.400 [2024-12-09 10:49:35.382605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:34.400 [2024-12-09 10:49:35.435472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.400 [2024-12-09 10:49:35.435487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.400 [2024-12-09 10:49:35.487558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:38:34.400 [2024-12-09 10:49:35.487609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:38:37.697 10:49:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:38:37.697 10:49:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:38:37.697 spdk_app_start Round 1 00:38:37.697 10:49:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2242427 /var/tmp/spdk-nbd.sock 00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2242427 ']' 00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:37.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.697 10:49:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:38:37.698 10:49:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.698 10:49:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:38:37.698 10:49:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:37.698 Malloc0 00:38:37.698 10:49:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:37.958 Malloc1 00:38:37.958 10:49:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.958 10:49:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:38:38.218 /dev/nbd0 00:38:38.218 10:49:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:38.218 10:49:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:38:38.218 1+0 records in 00:38:38.218 1+0 records out 00:38:38.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023882 s, 17.2 MB/s 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:38.218 10:49:39 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:38.218 10:49:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:38.218 10:49:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.218 10:49:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:38.218 10:49:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:38:38.478 /dev/nbd1 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:38:38.478 1+0 records in 00:38:38.478 1+0 records out 00:38:38.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231675 s, 17.7 MB/s 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:38.478 10:49:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:38.478 10:49:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:39.048 { 00:38:39.048 "nbd_device": "/dev/nbd0", 00:38:39.048 "bdev_name": "Malloc0" 00:38:39.048 }, 00:38:39.048 { 00:38:39.048 "nbd_device": "/dev/nbd1", 00:38:39.048 "bdev_name": "Malloc1" 00:38:39.048 } 00:38:39.048 ]' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:38:39.048 { 00:38:39.048 "nbd_device": "/dev/nbd0", 00:38:39.048 "bdev_name": "Malloc0" 00:38:39.048 }, 00:38:39.048 { 00:38:39.048 "nbd_device": "/dev/nbd1", 00:38:39.048 "bdev_name": "Malloc1" 00:38:39.048 } 00:38:39.048 ]' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:38:39.048 /dev/nbd1' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:38:39.048 /dev/nbd1' 00:38:39.048 
10:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:38:39.048 256+0 records in 00:38:39.048 256+0 records out 00:38:39.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00935205 s, 112 MB/s 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:39.048 10:49:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:39.048 256+0 records in 00:38:39.048 256+0 records out 00:38:39.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223482 s, 46.9 MB/s 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:38:39.048 256+0 records in 00:38:39.048 256+0 records out 00:38:39.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214247 s, 48.9 MB/s 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:39.048 10:49:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:39.308 10:49:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:39.568 10:49:40 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:39.568 10:49:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:39.828 10:49:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:39.828 10:49:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:39.828 10:49:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:39.828 10:49:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:39.828 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:40.087 10:49:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:38:40.087 10:49:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:38:40.347 10:49:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:38:40.347 [2024-12-09 10:49:41.494890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:40.607 [2024-12-09 10:49:41.547979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:40.607 [2024-12-09 10:49:41.547985] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.607 [2024-12-09 10:49:41.600991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:38:40.607 [2024-12-09 10:49:41.601044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:38:43.148 10:49:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:38:43.148 10:49:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:38:43.148 spdk_app_start Round 2 00:38:43.148 10:49:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2242427 /var/tmp/spdk-nbd.sock 00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2242427 ']' 00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:43.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:43.148 10:49:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:38:43.409 10:49:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:43.409 10:49:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:38:43.409 10:49:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:43.980 Malloc0 00:38:43.980 10:49:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:38:43.980 Malloc1 00:38:43.980 10:49:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:43.980 10:49:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:38:44.239 /dev/nbd0 00:38:44.239 10:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:44.239 10:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:44.239 10:49:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:44.239 10:49:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:44.239 10:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:38:44.240 1+0 records in 00:38:44.240 1+0 records out 00:38:44.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230128 s, 17.8 MB/s 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:44.240 10:49:45 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:44.240 10:49:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:44.240 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:44.240 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:44.240 10:49:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:38:44.809 /dev/nbd1 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:38:44.809 1+0 records in 00:38:44.809 1+0 records out 00:38:44.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297309 s, 13.8 MB/s 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:44.809 10:49:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:44.809 10:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:45.078 { 00:38:45.078 "nbd_device": "/dev/nbd0", 00:38:45.078 "bdev_name": "Malloc0" 00:38:45.078 }, 00:38:45.078 { 00:38:45.078 "nbd_device": "/dev/nbd1", 00:38:45.078 "bdev_name": "Malloc1" 00:38:45.078 } 00:38:45.078 ]' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:38:45.078 { 00:38:45.078 "nbd_device": "/dev/nbd0", 00:38:45.078 "bdev_name": "Malloc0" 00:38:45.078 }, 00:38:45.078 { 00:38:45.078 "nbd_device": "/dev/nbd1", 00:38:45.078 "bdev_name": "Malloc1" 00:38:45.078 } 00:38:45.078 ]' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:38:45.078 /dev/nbd1' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:38:45.078 /dev/nbd1' 00:38:45.078 
10:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:38:45.078 256+0 records in 00:38:45.078 256+0 records out 00:38:45.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119941 s, 87.4 MB/s 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:45.078 256+0 records in 00:38:45.078 256+0 records out 00:38:45.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203699 s, 51.5 MB/s 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:38:45.078 256+0 records in 00:38:45.078 256+0 records out 00:38:45.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218357 s, 48.0 MB/s 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:45.078 10:49:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:45.352 10:49:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:45.639 10:49:46 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:45.639 10:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:45.989 10:49:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:38:45.989 10:49:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:38:46.284 10:49:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:38:46.579 [2024-12-09 10:49:47.612226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:46.579 [2024-12-09 10:49:47.664693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:46.579 [2024-12-09 10:49:47.664699] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.579 [2024-12-09 10:49:47.716523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:38:46.579 [2024-12-09 10:49:47.716575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:38:49.969 10:49:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2242427 /var/tmp/spdk-nbd.sock 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2242427 ']' 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:49.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:38:49.969 10:49:50 event.app_repeat -- event/event.sh@39 -- # killprocess 2242427 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2242427 ']' 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2242427 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2242427 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2242427' 00:38:49.969 killing process with pid 2242427 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2242427 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2242427 00:38:49.969 spdk_app_start is called in Round 0. 00:38:49.969 Shutdown signal received, stop current app iteration 00:38:49.969 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:38:49.969 spdk_app_start is called in Round 1. 00:38:49.969 Shutdown signal received, stop current app iteration 00:38:49.969 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:38:49.969 spdk_app_start is called in Round 2. 
00:38:49.969 Shutdown signal received, stop current app iteration 00:38:49.969 Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 reinitialization... 00:38:49.969 spdk_app_start is called in Round 3. 00:38:49.969 Shutdown signal received, stop current app iteration 00:38:49.969 10:49:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:38:49.969 10:49:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:38:49.969 00:38:49.969 real 0m18.989s 00:38:49.969 user 0m42.296s 00:38:49.969 sys 0m3.659s 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.969 10:49:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:38:49.969 ************************************ 00:38:49.969 END TEST app_repeat 00:38:49.969 ************************************ 00:38:49.969 10:49:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:38:49.969 10:49:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:38:49.969 10:49:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.969 10:49:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.969 10:49:50 event -- common/autotest_common.sh@10 -- # set +x 00:38:49.969 ************************************ 00:38:49.969 START TEST cpu_locks 00:38:49.969 ************************************ 00:38:49.969 10:49:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:38:49.969 * Looking for test storage... 
00:38:49.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:38:49.969 10:49:51 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:49.969 10:49:51 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:38:49.969 10:49:51 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.229 10:49:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.229 --rc genhtml_branch_coverage=1 00:38:50.229 --rc genhtml_function_coverage=1 00:38:50.229 --rc genhtml_legend=1 00:38:50.229 --rc geninfo_all_blocks=1 00:38:50.229 --rc geninfo_unexecuted_blocks=1 00:38:50.229 00:38:50.229 ' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.229 --rc genhtml_branch_coverage=1 00:38:50.229 --rc genhtml_function_coverage=1 00:38:50.229 --rc genhtml_legend=1 00:38:50.229 --rc geninfo_all_blocks=1 00:38:50.229 --rc geninfo_unexecuted_blocks=1 
00:38:50.229 00:38:50.229 ' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.229 --rc genhtml_branch_coverage=1 00:38:50.229 --rc genhtml_function_coverage=1 00:38:50.229 --rc genhtml_legend=1 00:38:50.229 --rc geninfo_all_blocks=1 00:38:50.229 --rc geninfo_unexecuted_blocks=1 00:38:50.229 00:38:50.229 ' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.229 --rc genhtml_branch_coverage=1 00:38:50.229 --rc genhtml_function_coverage=1 00:38:50.229 --rc genhtml_legend=1 00:38:50.229 --rc geninfo_all_blocks=1 00:38:50.229 --rc geninfo_unexecuted_blocks=1 00:38:50.229 00:38:50.229 ' 00:38:50.229 10:49:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:38:50.229 10:49:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:38:50.229 10:49:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:38:50.229 10:49:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.229 10:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:50.229 ************************************ 00:38:50.229 START TEST default_locks 00:38:50.229 ************************************ 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2245166 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2245166 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2245166 ']' 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.229 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:38:50.229 [2024-12-09 10:49:51.291546] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:50.229 [2024-12-09 10:49:51.291602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245166 ] 00:38:50.489 [2024-12-09 10:49:51.404233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.489 [2024-12-09 10:49:51.459506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.748 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.748 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:38:50.748 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2245166 00:38:50.748 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2245166 00:38:50.748 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:51.008 lslocks: write error 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2245166 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2245166 ']' 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2245166 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245166 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2245166' 00:38:51.008 killing process with pid 2245166 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2245166 00:38:51.008 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2245166 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2245166 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2245166 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2245166 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2245166 ']' 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:38:51.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2245166) - No such process 00:38:51.578 ERROR: process (pid: 2245166) is no longer running 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:38:51.578 00:38:51.578 real 0m1.252s 00:38:51.578 user 0m1.254s 00:38:51.578 sys 0m0.500s 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.578 10:49:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:38:51.578 ************************************ 00:38:51.578 END TEST default_locks 00:38:51.578 ************************************ 00:38:51.578 10:49:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:38:51.578 10:49:52 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.578 10:49:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.578 10:49:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:51.578 ************************************ 00:38:51.578 START TEST default_locks_via_rpc 00:38:51.578 ************************************ 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2245384 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2245384 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2245384 ']' 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:51.578 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:38:51.578 [2024-12-09 10:49:52.648896] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:51.578 [2024-12-09 10:49:52.648968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245384 ] 00:38:51.838 [2024-12-09 10:49:52.774732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.838 [2024-12-09 10:49:52.827399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:52.098 10:49:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2245384 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2245384 00:38:52.098 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2245384 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2245384 ']' 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2245384 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245384 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2245384' 00:38:52.668 killing process with pid 2245384 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2245384 00:38:52.668 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2245384 00:38:53.237 00:38:53.237 real 0m1.557s 00:38:53.237 user 0m1.574s 00:38:53.237 sys 0m0.675s 00:38:53.237 10:49:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.237 10:49:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:53.237 ************************************ 00:38:53.237 END TEST default_locks_via_rpc 00:38:53.237 ************************************ 00:38:53.237 10:49:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:38:53.237 10:49:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:53.237 10:49:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.237 10:49:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:53.237 ************************************ 00:38:53.237 START TEST non_locking_app_on_locked_coremask 00:38:53.237 ************************************ 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2245694 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2245694 /var/tmp/spdk.sock 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2245694 ']' 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:53.237 10:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:38:53.237 [2024-12-09 10:49:54.291138] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:53.237 [2024-12-09 10:49:54.291219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245694 ] 00:38:53.497 [2024-12-09 10:49:54.418809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.497 [2024-12-09 10:49:54.472291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2245765 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2245765 /var/tmp/spdk2.sock 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2245765 ']' 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.141 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:54.141 [2024-12-09 10:49:55.127658] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:54.141 [2024-12-09 10:49:55.127716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245765 ] 00:38:54.141 [2024-12-09 10:49:55.290139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:38:54.141 [2024-12-09 10:49:55.290181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.400 [2024-12-09 10:49:55.410332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.335 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.335 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:55.335 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2245694 00:38:55.335 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2245694 00:38:55.335 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:38:56.710 lslocks: write error 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2245694 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2245694 ']' 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2245694 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245694 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2245694' 00:38:56.710 killing process with pid 2245694 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2245694 00:38:56.710 10:49:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2245694 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2245765 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2245765 ']' 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2245765 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245765 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2245765' 00:38:57.276 killing process with pid 2245765 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2245765 00:38:57.276 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2245765 00:38:57.843 00:38:57.843 real 0m4.616s 00:38:57.843 user 0m5.057s 00:38:57.843 sys 0m1.459s 00:38:57.843 10:49:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:57.843 10:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:57.843 ************************************ 00:38:57.843 END TEST non_locking_app_on_locked_coremask 00:38:57.843 ************************************ 00:38:57.843 10:49:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:38:57.843 10:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:57.843 10:49:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:57.843 10:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:38:57.843 ************************************ 00:38:57.843 START TEST locking_app_on_unlocked_coremask 00:38:57.843 ************************************ 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2246332 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2246332 /var/tmp/spdk.sock 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2246332 ']' 00:38:57.843 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.844 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.844 10:49:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:57.844 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.844 10:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:57.844 [2024-12-09 10:49:58.991104] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:38:57.844 [2024-12-09 10:49:58.991155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246332 ] 00:38:58.102 [2024-12-09 10:49:59.101284] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:38:58.102 [2024-12-09 10:49:59.101331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.102 [2024-12-09 10:49:59.157053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2246509 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2246509 /var/tmp/spdk2.sock 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2246509 ']' 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:38:59.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.038 10:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:38:59.038 [2024-12-09 10:49:59.960616] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:38:59.038 [2024-12-09 10:49:59.960707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246509 ] 00:38:59.038 [2024-12-09 10:50:00.146460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.296 [2024-12-09 10:50:00.251513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.864 10:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.864 10:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:38:59.864 10:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2246509 00:38:59.864 10:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2246509 00:38:59.864 10:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:01.239 lslocks: write error 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2246332 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2246332 ']' 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2246332 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246332 00:39:01.239 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:01.240 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:01.240 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246332' 00:39:01.240 killing process with pid 2246332 00:39:01.240 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2246332 00:39:01.240 10:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2246332 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2246509 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2246509 ']' 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2246509 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246509 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:02.176 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246509' 00:39:02.176 killing process with pid 2246509 00:39:02.177 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2246509 00:39:02.177 10:50:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2246509 00:39:02.744 00:39:02.744 real 0m4.756s 00:39:02.744 user 0m5.302s 00:39:02.744 sys 0m1.549s 00:39:02.744 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.744 10:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:02.744 ************************************ 00:39:02.744 END TEST locking_app_on_unlocked_coremask 00:39:02.744 ************************************ 00:39:02.744 10:50:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:39:02.744 10:50:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:02.744 10:50:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:02.744 10:50:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:02.744 ************************************ 00:39:02.745 START TEST locking_app_on_locked_coremask 00:39:02.745 ************************************ 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2247007 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2247007 /var/tmp/spdk.sock 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2247007 ']' 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:02.745 10:50:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:02.745 [2024-12-09 10:50:03.841505] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:02.745 [2024-12-09 10:50:03.841580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247007 ] 00:39:03.003 [2024-12-09 10:50:03.966618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.003 [2024-12-09 10:50:04.017705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2247080 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2247080 /var/tmp/spdk2.sock 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2247080 /var/tmp/spdk2.sock 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2247080 /var/tmp/spdk2.sock 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2247080 ']' 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:39:03.262 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:03.263 10:50:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:03.263 [2024-12-09 10:50:04.323603] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:03.263 [2024-12-09 10:50:04.323681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247080 ] 00:39:03.521 [2024-12-09 10:50:04.485160] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2247007 has claimed it. 00:39:03.521 [2024-12-09 10:50:04.485221] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:39:04.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2247080) - No such process 00:39:04.089 ERROR: process (pid: 2247080) is no longer running 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2247007 00:39:04.089 10:50:05 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2247007 00:39:04.089 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:04.657 lslocks: write error 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2247007 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2247007 ']' 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2247007 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247007 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247007' 00:39:04.657 killing process with pid 2247007 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2247007 00:39:04.657 10:50:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2247007 00:39:05.225 00:39:05.225 real 0m2.369s 00:39:05.225 user 0m2.614s 00:39:05.225 sys 0m0.866s 00:39:05.225 10:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.225 10:50:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:39:05.225 ************************************ 00:39:05.225 END TEST locking_app_on_locked_coremask 00:39:05.225 ************************************ 00:39:05.225 10:50:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:39:05.225 10:50:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:05.225 10:50:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.225 10:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:05.225 ************************************ 00:39:05.225 START TEST locking_overlapped_coremask 00:39:05.225 ************************************ 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2247298 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2247298 /var/tmp/spdk.sock 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2247298 ']' 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.225 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:05.225 [2024-12-09 10:50:06.288687] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:05.225 [2024-12-09 10:50:06.288743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247298 ] 00:39:05.225 [2024-12-09 10:50:06.397394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:05.485 [2024-12-09 10:50:06.452305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:05.485 [2024-12-09 10:50:06.452395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:05.485 [2024-12-09 10:50:06.452400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2247465 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2247465 /var/tmp/spdk2.sock 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2247465 /var/tmp/spdk2.sock 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2247465 /var/tmp/spdk2.sock 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2247465 ']' 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:05.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.745 10:50:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:05.745 [2024-12-09 10:50:06.754006] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:39:05.745 [2024-12-09 10:50:06.754074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247465 ] 00:39:05.745 [2024-12-09 10:50:06.876954] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2247298 has claimed it. 00:39:05.745 [2024-12-09 10:50:06.877000] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:39:06.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2247465) - No such process 00:39:06.317 ERROR: process (pid: 2247465) is no longer running 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2247298 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2247298 ']' 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2247298 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247298 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247298' 00:39:06.317 killing process with pid 2247298 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2247298 00:39:06.317 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2247298 00:39:06.889 00:39:06.889 real 0m1.668s 00:39:06.889 user 0m4.438s 00:39:06.889 sys 0m0.475s 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:06.889 
************************************ 00:39:06.889 END TEST locking_overlapped_coremask 00:39:06.889 ************************************ 00:39:06.889 10:50:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:39:06.889 10:50:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:06.889 10:50:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.889 10:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:06.889 ************************************ 00:39:06.889 START TEST locking_overlapped_coremask_via_rpc 00:39:06.889 ************************************ 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2247677 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2247677 /var/tmp/spdk.sock 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2247677 ']' 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.889 10:50:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:39:06.890 [2024-12-09 10:50:08.037722] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:06.890 [2024-12-09 10:50:08.037797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247677 ] 00:39:07.150 [2024-12-09 10:50:08.162485] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:39:07.150 [2024-12-09 10:50:08.162525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:07.150 [2024-12-09 10:50:08.221355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:07.150 [2024-12-09 10:50:08.221444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:07.150 [2024-12-09 10:50:08.221449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2247686 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2247686 /var/tmp/spdk2.sock 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2247686 ']' 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:07.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.410 10:50:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:07.410 [2024-12-09 10:50:08.542708] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:07.410 [2024-12-09 10:50:08.542770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247686 ] 00:39:07.670 [2024-12-09 10:50:08.665860] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:39:07.670 [2024-12-09 10:50:08.665890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:07.670 [2024-12-09 10:50:08.758867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:07.670 [2024-12-09 10:50:08.758962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:07.670 [2024-12-09 10:50:08.758964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:08.610 10:50:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:08.610 [2024-12-09 10:50:09.580717] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2247677 has claimed it. 00:39:08.610 request: 00:39:08.610 { 00:39:08.610 "method": "framework_enable_cpumask_locks", 00:39:08.610 "req_id": 1 00:39:08.610 } 00:39:08.610 Got JSON-RPC error response 00:39:08.610 response: 00:39:08.610 { 00:39:08.610 "code": -32603, 00:39:08.610 "message": "Failed to claim CPU core: 2" 00:39:08.610 } 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2247677 /var/tmp/spdk.sock 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2247677 ']' 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.610 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2247686 /var/tmp/spdk2.sock 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2247686 ']' 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:08.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.870 10:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:39:09.130 00:39:09.130 real 0m2.202s 00:39:09.130 user 0m1.188s 00:39:09.130 sys 0m0.218s 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.130 10:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:09.130 ************************************ 00:39:09.130 END TEST locking_overlapped_coremask_via_rpc 00:39:09.130 ************************************ 00:39:09.130 10:50:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:39:09.130 10:50:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2247677 ]] 00:39:09.130 10:50:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2247677 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2247677 ']' 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2247677 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247677 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247677' 00:39:09.130 killing process with pid 2247677 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2247677 00:39:09.130 10:50:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2247677 00:39:09.701 10:50:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2247686 ]] 00:39:09.701 10:50:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2247686 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2247686 ']' 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2247686 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247686 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2247686' 00:39:09.701 killing process with pid 2247686 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2247686 00:39:09.701 10:50:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2247686 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2247677 ]] 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2247677 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2247677 ']' 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2247677 00:39:10.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2247677) - No such process 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2247677 is not found' 00:39:10.272 Process with pid 2247677 is not found 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2247686 ]] 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2247686 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2247686 ']' 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2247686 00:39:10.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2247686) - No such process 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2247686 is not found' 00:39:10.272 Process with pid 2247686 is not found 00:39:10.272 10:50:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:39:10.272 00:39:10.272 real 0m20.137s 00:39:10.272 user 0m33.909s 00:39:10.272 sys 0m6.879s 00:39:10.272 10:50:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.272 
10:50:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:10.272 ************************************ 00:39:10.272 END TEST cpu_locks 00:39:10.272 ************************************ 00:39:10.272 00:39:10.272 real 0m48.273s 00:39:10.272 user 1m30.936s 00:39:10.272 sys 0m11.761s 00:39:10.272 10:50:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.272 10:50:11 event -- common/autotest_common.sh@10 -- # set +x 00:39:10.272 ************************************ 00:39:10.272 END TEST event 00:39:10.272 ************************************ 00:39:10.272 10:50:11 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:39:10.272 10:50:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:10.272 10:50:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.272 10:50:11 -- common/autotest_common.sh@10 -- # set +x 00:39:10.272 ************************************ 00:39:10.272 START TEST thread 00:39:10.272 ************************************ 00:39:10.272 10:50:11 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:39:10.272 * Looking for test storage... 
00:39:10.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:39:10.272 10:50:11 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:10.272 10:50:11 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:39:10.272 10:50:11 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:10.533 10:50:11 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:10.533 10:50:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.533 10:50:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.533 10:50:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.534 10:50:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.534 10:50:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.534 10:50:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.534 10:50:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.534 10:50:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.534 10:50:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.534 10:50:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.534 10:50:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.534 10:50:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:39:10.534 10:50:11 thread -- scripts/common.sh@345 -- # : 1 00:39:10.534 10:50:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.534 10:50:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.534 10:50:11 thread -- scripts/common.sh@365 -- # decimal 1 00:39:10.534 10:50:11 thread -- scripts/common.sh@353 -- # local d=1 00:39:10.534 10:50:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.534 10:50:11 thread -- scripts/common.sh@355 -- # echo 1 00:39:10.534 10:50:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.534 10:50:11 thread -- scripts/common.sh@366 -- # decimal 2 00:39:10.534 10:50:11 thread -- scripts/common.sh@353 -- # local d=2 00:39:10.534 10:50:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.534 10:50:11 thread -- scripts/common.sh@355 -- # echo 2 00:39:10.534 10:50:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.534 10:50:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.534 10:50:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.534 10:50:11 thread -- scripts/common.sh@368 -- # return 0 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.534 --rc genhtml_branch_coverage=1 00:39:10.534 --rc genhtml_function_coverage=1 00:39:10.534 --rc genhtml_legend=1 00:39:10.534 --rc geninfo_all_blocks=1 00:39:10.534 --rc geninfo_unexecuted_blocks=1 00:39:10.534 00:39:10.534 ' 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.534 --rc genhtml_branch_coverage=1 00:39:10.534 --rc genhtml_function_coverage=1 00:39:10.534 --rc genhtml_legend=1 00:39:10.534 --rc geninfo_all_blocks=1 00:39:10.534 --rc geninfo_unexecuted_blocks=1 00:39:10.534 00:39:10.534 ' 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:10.534 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.534 --rc genhtml_branch_coverage=1 00:39:10.534 --rc genhtml_function_coverage=1 00:39:10.534 --rc genhtml_legend=1 00:39:10.534 --rc geninfo_all_blocks=1 00:39:10.534 --rc geninfo_unexecuted_blocks=1 00:39:10.534 00:39:10.534 ' 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.534 --rc genhtml_branch_coverage=1 00:39:10.534 --rc genhtml_function_coverage=1 00:39:10.534 --rc genhtml_legend=1 00:39:10.534 --rc geninfo_all_blocks=1 00:39:10.534 --rc geninfo_unexecuted_blocks=1 00:39:10.534 00:39:10.534 ' 00:39:10.534 10:50:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.534 10:50:11 thread -- common/autotest_common.sh@10 -- # set +x 00:39:10.534 ************************************ 00:39:10.534 START TEST thread_poller_perf 00:39:10.534 ************************************ 00:39:10.534 10:50:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:39:10.534 [2024-12-09 10:50:11.551965] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:39:10.534 [2024-12-09 10:50:11.552043] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248153 ] 00:39:10.534 [2024-12-09 10:50:11.664716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.795 [2024-12-09 10:50:11.719185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.795 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:39:11.734 [2024-12-09T09:50:12.911Z] ====================================== 00:39:11.735 [2024-12-09T09:50:12.911Z] busy:2310039846 (cyc) 00:39:11.735 [2024-12-09T09:50:12.911Z] total_run_count: 267000 00:39:11.735 [2024-12-09T09:50:12.911Z] tsc_hz: 2300000000 (cyc) 00:39:11.735 [2024-12-09T09:50:12.911Z] ====================================== 00:39:11.735 [2024-12-09T09:50:12.911Z] poller_cost: 8651 (cyc), 3761 (nsec) 00:39:11.735 00:39:11.735 real 0m1.309s 00:39:11.735 user 0m1.207s 00:39:11.735 sys 0m0.096s 00:39:11.735 10:50:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.735 10:50:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:39:11.735 ************************************ 00:39:11.735 END TEST thread_poller_perf 00:39:11.735 ************************************ 00:39:11.735 10:50:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:39:11.735 10:50:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:39:11.735 10:50:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.735 10:50:12 thread -- common/autotest_common.sh@10 -- # set +x 00:39:11.995 ************************************ 00:39:11.995 START TEST thread_poller_perf 00:39:11.995 
************************************ 00:39:11.995 10:50:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:39:11.995 [2024-12-09 10:50:12.951046] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:11.995 [2024-12-09 10:50:12.951147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248347 ] 00:39:11.995 [2024-12-09 10:50:13.078082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.995 [2024-12-09 10:50:13.134025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.995 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:39:13.377 [2024-12-09T09:50:14.553Z] ====================================== 00:39:13.377 [2024-12-09T09:50:14.553Z] busy:2302587328 (cyc) 00:39:13.377 [2024-12-09T09:50:14.553Z] total_run_count: 3288000 00:39:13.377 [2024-12-09T09:50:14.553Z] tsc_hz: 2300000000 (cyc) 00:39:13.377 [2024-12-09T09:50:14.553Z] ====================================== 00:39:13.377 [2024-12-09T09:50:14.553Z] poller_cost: 700 (cyc), 304 (nsec) 00:39:13.377 00:39:13.377 real 0m1.320s 00:39:13.377 user 0m1.197s 00:39:13.377 sys 0m0.116s 00:39:13.377 10:50:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.377 10:50:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:39:13.377 ************************************ 00:39:13.377 END TEST thread_poller_perf 00:39:13.377 ************************************ 00:39:13.377 10:50:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:39:13.377 00:39:13.377 real 0m3.020s 00:39:13.377 user 0m2.604s 00:39:13.377 sys 0m0.434s 00:39:13.377 10:50:14 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.377 10:50:14 thread -- common/autotest_common.sh@10 -- # set +x 00:39:13.377 ************************************ 00:39:13.377 END TEST thread 00:39:13.377 ************************************ 00:39:13.377 10:50:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:39:13.377 10:50:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:39:13.377 10:50:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:13.377 10:50:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.377 10:50:14 -- common/autotest_common.sh@10 -- # set +x 00:39:13.377 ************************************ 00:39:13.377 START TEST app_cmdline 00:39:13.377 ************************************ 00:39:13.378 10:50:14 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:39:13.378 * Looking for test storage... 00:39:13.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:39:13.378 10:50:14 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:13.378 10:50:14 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:39:13.378 10:50:14 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:13.639 10:50:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:13.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.639 --rc genhtml_branch_coverage=1 
00:39:13.639 --rc genhtml_function_coverage=1 00:39:13.639 --rc genhtml_legend=1 00:39:13.639 --rc geninfo_all_blocks=1 00:39:13.639 --rc geninfo_unexecuted_blocks=1 00:39:13.639 00:39:13.639 ' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:13.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.639 --rc genhtml_branch_coverage=1 00:39:13.639 --rc genhtml_function_coverage=1 00:39:13.639 --rc genhtml_legend=1 00:39:13.639 --rc geninfo_all_blocks=1 00:39:13.639 --rc geninfo_unexecuted_blocks=1 00:39:13.639 00:39:13.639 ' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:13.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.639 --rc genhtml_branch_coverage=1 00:39:13.639 --rc genhtml_function_coverage=1 00:39:13.639 --rc genhtml_legend=1 00:39:13.639 --rc geninfo_all_blocks=1 00:39:13.639 --rc geninfo_unexecuted_blocks=1 00:39:13.639 00:39:13.639 ' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:13.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.639 --rc genhtml_branch_coverage=1 00:39:13.639 --rc genhtml_function_coverage=1 00:39:13.639 --rc genhtml_legend=1 00:39:13.639 --rc geninfo_all_blocks=1 00:39:13.639 --rc geninfo_unexecuted_blocks=1 00:39:13.639 00:39:13.639 ' 00:39:13.639 10:50:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:39:13.639 10:50:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2248683 00:39:13.639 10:50:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2248683 00:39:13.639 10:50:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2248683 ']' 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:13.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.639 10:50:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:39:13.639 [2024-12-09 10:50:14.671716] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:13.639 [2024-12-09 10:50:14.671791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248683 ] 00:39:13.639 [2024-12-09 10:50:14.799079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.900 [2024-12-09 10:50:14.851605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.158 10:50:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.158 10:50:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:39:14.158 10:50:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:39:14.419 { 00:39:14.419 "version": "SPDK v25.01-pre git sha1 b920049a1", 00:39:14.419 "fields": { 00:39:14.419 "major": 25, 00:39:14.419 "minor": 1, 00:39:14.419 "patch": 0, 00:39:14.419 "suffix": "-pre", 00:39:14.419 "commit": "b920049a1" 00:39:14.419 } 00:39:14.419 } 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:39:14.419 10:50:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:14.419 10:50:15 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:39:14.680 request: 00:39:14.680 { 00:39:14.680 "method": "env_dpdk_get_mem_stats", 00:39:14.680 "req_id": 1 00:39:14.680 } 00:39:14.680 Got JSON-RPC error response 00:39:14.680 response: 00:39:14.680 { 00:39:14.680 "code": -32601, 00:39:14.680 "message": "Method not found" 00:39:14.680 } 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:14.680 10:50:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2248683 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2248683 ']' 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2248683 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2248683 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2248683' 00:39:14.680 killing process with pid 2248683 00:39:14.680 
10:50:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 2248683 00:39:14.680 10:50:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 2248683 00:39:15.251 00:39:15.251 real 0m1.779s 00:39:15.251 user 0m2.116s 00:39:15.251 sys 0m0.603s 00:39:15.251 10:50:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.251 10:50:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:39:15.251 ************************************ 00:39:15.251 END TEST app_cmdline 00:39:15.251 ************************************ 00:39:15.251 10:50:16 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:39:15.251 10:50:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:15.251 10:50:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:15.251 10:50:16 -- common/autotest_common.sh@10 -- # set +x 00:39:15.251 ************************************ 00:39:15.251 START TEST version 00:39:15.251 ************************************ 00:39:15.251 10:50:16 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:39:15.251 * Looking for test storage... 
00:39:15.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:39:15.251 10:50:16 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:15.251 10:50:16 version -- common/autotest_common.sh@1711 -- # lcov --version 00:39:15.251 10:50:16 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:15.511 10:50:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:15.511 10:50:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:15.511 10:50:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:15.511 10:50:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:39:15.511 10:50:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:39:15.511 10:50:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:39:15.511 10:50:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:39:15.511 10:50:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:39:15.511 10:50:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:39:15.511 10:50:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:39:15.511 10:50:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:15.511 10:50:16 version -- scripts/common.sh@344 -- # case "$op" in 00:39:15.511 10:50:16 version -- scripts/common.sh@345 -- # : 1 00:39:15.511 10:50:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:15.511 10:50:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:15.511 10:50:16 version -- scripts/common.sh@365 -- # decimal 1 00:39:15.511 10:50:16 version -- scripts/common.sh@353 -- # local d=1 00:39:15.511 10:50:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:15.511 10:50:16 version -- scripts/common.sh@355 -- # echo 1 00:39:15.511 10:50:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:39:15.511 10:50:16 version -- scripts/common.sh@366 -- # decimal 2 00:39:15.511 10:50:16 version -- scripts/common.sh@353 -- # local d=2 00:39:15.511 10:50:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:15.511 10:50:16 version -- scripts/common.sh@355 -- # echo 2 00:39:15.511 10:50:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:39:15.511 10:50:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:15.511 10:50:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:15.511 10:50:16 version -- scripts/common.sh@368 -- # return 0 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:15.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.511 --rc genhtml_branch_coverage=1 00:39:15.511 --rc genhtml_function_coverage=1 00:39:15.511 --rc genhtml_legend=1 00:39:15.511 --rc geninfo_all_blocks=1 00:39:15.511 --rc geninfo_unexecuted_blocks=1 00:39:15.511 00:39:15.511 ' 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:15.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.511 --rc genhtml_branch_coverage=1 00:39:15.511 --rc genhtml_function_coverage=1 00:39:15.511 --rc genhtml_legend=1 00:39:15.511 --rc geninfo_all_blocks=1 00:39:15.511 --rc geninfo_unexecuted_blocks=1 00:39:15.511 00:39:15.511 ' 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:15.511 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.511 --rc genhtml_branch_coverage=1 00:39:15.511 --rc genhtml_function_coverage=1 00:39:15.511 --rc genhtml_legend=1 00:39:15.511 --rc geninfo_all_blocks=1 00:39:15.511 --rc geninfo_unexecuted_blocks=1 00:39:15.511 00:39:15.511 ' 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:15.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.511 --rc genhtml_branch_coverage=1 00:39:15.511 --rc genhtml_function_coverage=1 00:39:15.511 --rc genhtml_legend=1 00:39:15.511 --rc geninfo_all_blocks=1 00:39:15.511 --rc geninfo_unexecuted_blocks=1 00:39:15.511 00:39:15.511 ' 00:39:15.511 10:50:16 version -- app/version.sh@17 -- # get_header_version major 00:39:15.511 10:50:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # cut -f2 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # tr -d '"' 00:39:15.511 10:50:16 version -- app/version.sh@17 -- # major=25 00:39:15.511 10:50:16 version -- app/version.sh@18 -- # get_header_version minor 00:39:15.511 10:50:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # cut -f2 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # tr -d '"' 00:39:15.511 10:50:16 version -- app/version.sh@18 -- # minor=1 00:39:15.511 10:50:16 version -- app/version.sh@19 -- # get_header_version patch 00:39:15.511 10:50:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # cut -f2 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # tr -d '"' 00:39:15.511 
10:50:16 version -- app/version.sh@19 -- # patch=0 00:39:15.511 10:50:16 version -- app/version.sh@20 -- # get_header_version suffix 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # cut -f2 00:39:15.511 10:50:16 version -- app/version.sh@14 -- # tr -d '"' 00:39:15.511 10:50:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:39:15.511 10:50:16 version -- app/version.sh@20 -- # suffix=-pre 00:39:15.511 10:50:16 version -- app/version.sh@22 -- # version=25.1 00:39:15.511 10:50:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:39:15.511 10:50:16 version -- app/version.sh@28 -- # version=25.1rc0 00:39:15.511 10:50:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:39:15.511 10:50:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:39:15.511 10:50:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:39:15.511 10:50:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:39:15.511 00:39:15.511 real 0m0.292s 00:39:15.511 user 0m0.168s 00:39:15.511 sys 0m0.170s 00:39:15.511 10:50:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.511 10:50:16 version -- common/autotest_common.sh@10 -- # set +x 00:39:15.511 ************************************ 00:39:15.511 END TEST version 00:39:15.511 ************************************ 00:39:15.511 10:50:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:39:15.511 10:50:16 -- spdk/autotest.sh@194 -- # uname -s 00:39:15.511 10:50:16 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:39:15.511 10:50:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:39:15.511 10:50:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:39:15.511 10:50:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:39:15.511 10:50:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:15.511 10:50:16 -- common/autotest_common.sh@10 -- # set +x 00:39:15.511 10:50:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:39:15.511 10:50:16 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:39:15.511 10:50:16 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:39:15.511 10:50:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:15.511 10:50:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:15.511 10:50:16 -- common/autotest_common.sh@10 -- # set +x 00:39:15.511 ************************************ 00:39:15.511 START TEST nvmf_tcp 00:39:15.511 ************************************ 00:39:15.511 10:50:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:39:15.769 * Looking for test storage... 
00:39:15.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:15.769 10:50:16 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.769 --rc genhtml_branch_coverage=1 00:39:15.769 --rc genhtml_function_coverage=1 00:39:15.769 --rc genhtml_legend=1 00:39:15.769 --rc geninfo_all_blocks=1 00:39:15.769 --rc geninfo_unexecuted_blocks=1 00:39:15.769 00:39:15.769 ' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.769 --rc genhtml_branch_coverage=1 00:39:15.769 --rc genhtml_function_coverage=1 00:39:15.769 --rc genhtml_legend=1 00:39:15.769 --rc geninfo_all_blocks=1 00:39:15.769 --rc geninfo_unexecuted_blocks=1 00:39:15.769 00:39:15.769 ' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:39:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.769 --rc genhtml_branch_coverage=1 00:39:15.769 --rc genhtml_function_coverage=1 00:39:15.769 --rc genhtml_legend=1 00:39:15.769 --rc geninfo_all_blocks=1 00:39:15.769 --rc geninfo_unexecuted_blocks=1 00:39:15.769 00:39:15.769 ' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:15.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:15.769 --rc genhtml_branch_coverage=1 00:39:15.769 --rc genhtml_function_coverage=1 00:39:15.769 --rc genhtml_legend=1 00:39:15.769 --rc geninfo_all_blocks=1 00:39:15.769 --rc geninfo_unexecuted_blocks=1 00:39:15.769 00:39:15.769 ' 00:39:15.769 10:50:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:39:15.769 10:50:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:39:15.769 10:50:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:15.769 10:50:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:15.769 ************************************ 00:39:15.769 START TEST nvmf_target_core 00:39:15.769 ************************************ 00:39:15.769 10:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:39:16.027 * Looking for test storage... 
00:39:16.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.027 --rc genhtml_branch_coverage=1 00:39:16.027 --rc genhtml_function_coverage=1 00:39:16.027 --rc genhtml_legend=1 00:39:16.027 --rc geninfo_all_blocks=1 00:39:16.027 --rc geninfo_unexecuted_blocks=1 00:39:16.027 00:39:16.027 ' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.027 --rc genhtml_branch_coverage=1 
00:39:16.027 --rc genhtml_function_coverage=1 00:39:16.027 --rc genhtml_legend=1 00:39:16.027 --rc geninfo_all_blocks=1 00:39:16.027 --rc geninfo_unexecuted_blocks=1 00:39:16.027 00:39:16.027 ' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.027 --rc genhtml_branch_coverage=1 00:39:16.027 --rc genhtml_function_coverage=1 00:39:16.027 --rc genhtml_legend=1 00:39:16.027 --rc geninfo_all_blocks=1 00:39:16.027 --rc geninfo_unexecuted_blocks=1 00:39:16.027 00:39:16.027 ' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.027 --rc genhtml_branch_coverage=1 00:39:16.027 --rc genhtml_function_coverage=1 00:39:16.027 --rc genhtml_legend=1 00:39:16.027 --rc geninfo_all_blocks=1 00:39:16.027 --rc geninfo_unexecuted_blocks=1 00:39:16.027 00:39:16.027 ' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.027 10:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:39:16.286 ************************************ 00:39:16.286 START TEST nvmf_abort 00:39:16.286 ************************************ 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:39:16.286 * Looking for test storage... 
00:39:16.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.286 
10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.286 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.287 --rc genhtml_branch_coverage=1 00:39:16.287 --rc genhtml_function_coverage=1 00:39:16.287 --rc genhtml_legend=1 00:39:16.287 --rc geninfo_all_blocks=1 00:39:16.287 --rc 
geninfo_unexecuted_blocks=1 00:39:16.287 00:39:16.287 ' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.287 --rc genhtml_branch_coverage=1 00:39:16.287 --rc genhtml_function_coverage=1 00:39:16.287 --rc genhtml_legend=1 00:39:16.287 --rc geninfo_all_blocks=1 00:39:16.287 --rc geninfo_unexecuted_blocks=1 00:39:16.287 00:39:16.287 ' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.287 --rc genhtml_branch_coverage=1 00:39:16.287 --rc genhtml_function_coverage=1 00:39:16.287 --rc genhtml_legend=1 00:39:16.287 --rc geninfo_all_blocks=1 00:39:16.287 --rc geninfo_unexecuted_blocks=1 00:39:16.287 00:39:16.287 ' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.287 --rc genhtml_branch_coverage=1 00:39:16.287 --rc genhtml_function_coverage=1 00:39:16.287 --rc genhtml_legend=1 00:39:16.287 --rc geninfo_all_blocks=1 00:39:16.287 --rc geninfo_unexecuted_blocks=1 00:39:16.287 00:39:16.287 ' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.287 10:50:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.287 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:24.420 10:50:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:24.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:24.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:24.420 10:50:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.420 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:24.421 Found net devices under 0000:af:00.0: cvl_0_0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:39:24.421 Found net devices under 0000:af:00.1: cvl_0_1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:24.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:24.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:39:24.421 00:39:24.421 --- 10.0.0.2 ping statistics --- 00:39:24.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.421 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:24.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:39:24.421 00:39:24.421 --- 10.0.0.1 ping statistics --- 00:39:24.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.421 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2252149 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2252149 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2252149 ']' 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 [2024-12-09 10:50:24.437020] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:39:24.421 [2024-12-09 10:50:24.437093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.421 [2024-12-09 10:50:24.539555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:24.421 [2024-12-09 10:50:24.585151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.421 [2024-12-09 10:50:24.585193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.421 [2024-12-09 10:50:24.585204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.421 [2024-12-09 10:50:24.585231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.421 [2024-12-09 10:50:24.585239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:24.421 [2024-12-09 10:50:24.586480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:24.421 [2024-12-09 10:50:24.586584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.421 [2024-12-09 10:50:24.586586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 [2024-12-09 10:50:24.746596] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 Malloc0 00:39:24.421 10:50:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.421 Delay0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.421 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.422 [2024-12-09 10:50:24.828396] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.422 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:24.422 [2024-12-09 10:50:24.975809] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:26.331 Initializing NVMe Controllers 00:39:26.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:26.331 controller IO queue size 128 less than required 00:39:26.331 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:26.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:26.331 Initialization complete. Launching workers. 
00:39:26.331 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 24433 00:39:26.331 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24498, failed to submit 62 00:39:26.331 success 24437, unsuccessful 61, failed 0 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.331 rmmod nvme_tcp 00:39:26.331 rmmod nvme_fabrics 00:39:26.331 rmmod nvme_keyring 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:26.331 10:50:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2252149 ']' 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2252149 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2252149 ']' 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2252149 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252149 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252149' 00:39:26.331 killing process with pid 2252149 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2252149 00:39:26.331 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2252149 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.590 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.501 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:28.501 00:39:28.501 real 0m12.443s 00:39:28.501 user 0m13.009s 00:39:28.501 sys 0m5.979s 00:39:28.501 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.501 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:28.501 ************************************ 00:39:28.501 END TEST nvmf_abort 00:39:28.501 ************************************ 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:39:28.761 ************************************ 00:39:28.761 START TEST nvmf_ns_hotplug_stress 00:39:28.761 ************************************ 00:39:28.761 10:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:39:28.761 * Looking for test storage... 00:39:28.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.761 
10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.761 10:50:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.761 --rc genhtml_branch_coverage=1 00:39:28.761 --rc genhtml_function_coverage=1 00:39:28.761 --rc genhtml_legend=1 00:39:28.761 --rc geninfo_all_blocks=1 00:39:28.761 --rc geninfo_unexecuted_blocks=1 00:39:28.761 00:39:28.761 ' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.761 --rc genhtml_branch_coverage=1 00:39:28.761 --rc genhtml_function_coverage=1 00:39:28.761 --rc genhtml_legend=1 00:39:28.761 --rc geninfo_all_blocks=1 00:39:28.761 --rc geninfo_unexecuted_blocks=1 00:39:28.761 00:39:28.761 ' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.761 --rc genhtml_branch_coverage=1 00:39:28.761 --rc genhtml_function_coverage=1 00:39:28.761 --rc genhtml_legend=1 00:39:28.761 --rc geninfo_all_blocks=1 00:39:28.761 --rc geninfo_unexecuted_blocks=1 00:39:28.761 00:39:28.761 ' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.761 --rc genhtml_branch_coverage=1 00:39:28.761 --rc genhtml_function_coverage=1 00:39:28.761 --rc genhtml_legend=1 00:39:28.761 --rc geninfo_all_blocks=1 00:39:28.761 --rc geninfo_unexecuted_blocks=1 00:39:28.761 
00:39:28.761 ' 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:28.761 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:29.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.022 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:35.587 10:50:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:35.587 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:35.588 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:35.588 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:35.588 10:50:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:35.588 Found net devices under 0000:af:00.0: cvl_0_0 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:35.588 10:50:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:35.588 Found net devices under 0000:af:00.1: cvl_0_1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:35.588 10:50:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:35.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:35.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:39:35.588 00:39:35.588 --- 10.0.0.2 ping statistics --- 00:39:35.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.588 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:35.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:35.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:39:35.588 00:39:35.588 --- 10.0.0.1 ping statistics --- 00:39:35.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.588 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:35.588 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2255626 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2255626 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2255626 ']' 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:35.589 [2024-12-09 10:50:36.345895] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:39:35.589 [2024-12-09 10:50:36.345950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:35.589 [2024-12-09 10:50:36.429290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:35.589 [2024-12-09 10:50:36.475335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:35.589 [2024-12-09 10:50:36.475374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:35.589 [2024-12-09 10:50:36.475384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:35.589 [2024-12-09 10:50:36.475394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:35.589 [2024-12-09 10:50:36.475402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:35.589 [2024-12-09 10:50:36.476639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:35.589 [2024-12-09 10:50:36.476732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:35.589 [2024-12-09 10:50:36.476735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:35.589 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:35.849 [2024-12-09 10:50:36.904851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:35.849 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:36.108 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:36.108 [2024-12-09 10:50:37.282634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.368 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:36.368 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:36.627 Malloc0 00:39:36.627 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:36.887 Delay0 00:39:36.887 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:37.146 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:37.406 NULL1 00:39:37.406 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:37.666 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:37.666 10:50:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2255989 00:39:37.666 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:37.666 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.926 10:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:37.926 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:37.926 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:38.186 true 00:39:38.445 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:38.445 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.705 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:38.705 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:38.705 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:39.275 true 00:39:39.275 10:50:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:39.275 10:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.534 10:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:39.793 10:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:39.793 10:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:40.053 true 00:39:40.053 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:40.053 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.053 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:40.622 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:40.622 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:40.882 true 00:39:40.882 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:40.882 10:50:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.142 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:41.402 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:41.402 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:41.661 true 00:39:41.661 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:41.661 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.921 10:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:42.181 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:42.181 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:42.441 true 00:39:42.441 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:42.441 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.700 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:42.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:42.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:43.218 true 00:39:43.218 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:43.218 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:43.788 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.049 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:44.049 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:44.308 true 00:39:44.308 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:44.308 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:44.568 
10:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.827 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:44.827 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:45.086 true 00:39:45.086 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:45.086 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:45.345 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:45.914 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:45.914 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:45.914 true 00:39:46.173 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:46.173 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:46.433 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:46.692 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:46.692 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:46.951 true 00:39:46.951 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:46.951 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:47.210 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.468 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:47.468 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:47.727 true 00:39:47.987 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:47.987 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:48.246 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:48.505 
10:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:48.505 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:48.764 true 00:39:48.764 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:48.764 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:49.023 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:49.282 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:49.282 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:49.541 true 00:39:49.541 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:49.541 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:49.799 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.064 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:50.064 10:50:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:50.328 true 00:39:50.328 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:50.328 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:50.588 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.848 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:50.848 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:51.107 true 00:39:51.107 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:51.107 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:51.676 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:51.676 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:51.676 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:51.934 true 00:39:51.934 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:51.935 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:52.503 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:52.503 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:52.503 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:52.764 true 00:39:52.764 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:52.764 10:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:53.332 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:53.592 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:53.592 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:53.851 true 00:39:53.851 10:50:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:53.851 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:54.110 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:54.368 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:54.368 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:54.627 true 00:39:54.627 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:54.627 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:54.886 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:55.145 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:55.145 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:55.404 true 00:39:55.663 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:55.663 10:50:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:55.923 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.184 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:56.184 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:56.444 true 00:39:56.444 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:56.444 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:56.704 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.964 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:56.964 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:57.224 true 00:39:57.224 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:57.224 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:57.484 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:57.748 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:57.748 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:58.007 true 00:39:58.267 10:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:58.267 10:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:58.527 10:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:58.787 10:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:58.787 10:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:59.047 true 00:39:59.047 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:59.047 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:59.307 
10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:59.567 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:59.567 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:59.828 true 00:39:59.828 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:39:59.828 10:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.088 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.347 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:00.347 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:00.606 true 00:40:00.606 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:00.865 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.865 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:01.435 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:01.435 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:01.435 true 00:40:01.435 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:01.435 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.695 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:02.265 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:02.265 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:02.265 true 00:40:02.265 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:02.265 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.834 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.094 
10:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:03.094 10:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:03.353 true 00:40:03.353 10:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:03.353 10:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.612 10:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.871 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:03.871 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:04.130 true 00:40:04.394 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:04.394 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.653 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.912 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:04.912 10:51:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:05.171 true 00:40:05.171 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:05.171 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.431 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.690 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:05.690 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:05.949 true 00:40:05.949 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:05.949 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.517 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.776 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:06.776 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:07.053 true 00:40:07.053 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:07.053 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.379 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.639 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:07.639 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:07.899 true 00:40:07.899 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989 00:40:07.899 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.899 Initializing NVMe Controllers 00:40:07.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:07.899 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:40:07.899 Controller IO queue size 128, less than required. 00:40:07.899 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:40:07.899 WARNING: Some requested NVMe devices were skipped
00:40:07.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:40:07.899 Initialization complete. Launching workers.
00:40:07.899 ========================================================
00:40:07.899 Latency(us)
00:40:07.899 Device Information                     :       IOPS      MiB/s    Average        min        max
00:40:07.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27314.73      13.34    4685.57    1759.66   10057.04
00:40:07.899 ========================================================
00:40:07.900 Total                                  :   27314.73      13.34    4685.57    1759.66   10057.04
00:40:08.159 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:40:08.419 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:40:08.419 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:40:08.678 true
00:40:08.678 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2255989
00:40:08.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2255989) - No such process
00:40:08.678 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2255989
00:40:08.678 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:08.937 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:09.196 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:40:09.197 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:40:09.197 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:40:09.197 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:09.197 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:40:09.456 null0 00:40:09.456 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:09.456 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:09.456 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:40:09.716 null1 00:40:09.975 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:09.975 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:09.975 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:40:09.975 null2 00:40:09.975 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:09.975 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:09.975 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:40:10.235 null3 00:40:10.494 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:10.494 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:10.494 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:40:10.753 null4 00:40:10.753 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:10.753 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:10.753 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:40:11.012 null5 00:40:11.012 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.012 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.012 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:40:11.272 null6 00:40:11.272 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.272 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.272 10:51:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:40:11.532 null7 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:40:11.532 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2261095 2261096 2261098 2261100 2261102 2261104 2261106 2261107 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:11.533 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:11.793 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:12.052 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.052 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.052 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:12.052 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.052 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.052 
10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.053 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:12.313 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.572 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:12.832 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.832 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:12.832 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:13.091 10:51:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.091 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:13.351 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.611 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.871 10:51:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:13.871 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.129 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:14.389 10:51:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.389 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:14.648 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.908 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:14.908 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:15.168 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:15.427 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:15.686 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:15.687 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.687 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.687 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:15.946 10:51:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.946 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.206 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.466 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:16.467 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:40:16.727 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:16.987 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:16.987 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:16.987 10:51:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:16.987 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:17.247 rmmod nvme_tcp 00:40:17.247 rmmod nvme_fabrics 00:40:17.247 rmmod nvme_keyring 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2255626 ']' 00:40:17.247 10:51:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2255626 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2255626 ']' 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2255626 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:17.247 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255626 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255626' 00:40:17.507 killing process with pid 2255626 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2255626 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2255626 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-save 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:17.507 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:17.508 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.508 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.508 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:20.049 00:40:20.049 real 0m51.010s 00:40:20.049 user 3m40.886s 00:40:20.049 sys 0m21.314s 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:20.049 ************************************ 00:40:20.049 END TEST nvmf_ns_hotplug_stress 00:40:20.049 ************************************ 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:40:20.049 ************************************ 00:40:20.049 START TEST nvmf_delete_subsystem 00:40:20.049 ************************************ 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:40:20.049 * Looking for test storage... 00:40:20.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:40:20.049 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:20.049 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:20.049 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:20.049 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:20.049 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:20.049 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:40:20.050 10:51:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.050 --rc genhtml_branch_coverage=1 00:40:20.050 --rc genhtml_function_coverage=1 00:40:20.050 --rc genhtml_legend=1 00:40:20.050 --rc geninfo_all_blocks=1 00:40:20.050 --rc geninfo_unexecuted_blocks=1 00:40:20.050 00:40:20.050 ' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.050 --rc genhtml_branch_coverage=1 00:40:20.050 --rc genhtml_function_coverage=1 00:40:20.050 --rc genhtml_legend=1 00:40:20.050 --rc geninfo_all_blocks=1 00:40:20.050 --rc geninfo_unexecuted_blocks=1 00:40:20.050 00:40:20.050 ' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.050 --rc genhtml_branch_coverage=1 00:40:20.050 --rc genhtml_function_coverage=1 00:40:20.050 --rc genhtml_legend=1 00:40:20.050 --rc geninfo_all_blocks=1 00:40:20.050 --rc geninfo_unexecuted_blocks=1 00:40:20.050 00:40:20.050 ' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.050 --rc 
genhtml_branch_coverage=1 00:40:20.050 --rc genhtml_function_coverage=1 00:40:20.050 --rc genhtml_legend=1 00:40:20.050 --rc geninfo_all_blocks=1 00:40:20.050 --rc geninfo_unexecuted_blocks=1 00:40:20.050 00:40:20.050 ' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.050 10:51:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:20.050 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:20.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:20.051 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.175 10:51:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:28.175 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:28.175 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:28.175 Found net devices under 0000:af:00.0: cvl_0_0 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:40:28.175 Found net devices under 0000:af:00.1: cvl_0_1 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.176 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:28.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:40:28.176 00:40:28.176 --- 10.0.0.2 ping statistics --- 00:40:28.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.176 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:28.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:40:28.176 00:40:28.176 --- 10.0.0.1 ping statistics --- 00:40:28.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.176 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:28.176 10:51:28 
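The `nvmf_tcp_init` steps traced above (flush both ports, move one into a network namespace, address each side, bring links up, open TCP port 4420, verify with cross-namespace pings) can be condensed into a standalone sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addresses are taken from this log; the `run` wrapper only echoes each command, so the sketch is runnable without root or real NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from nvmf/common.sh.
# Names and addresses mirror this log; `run` echoes instead of
# executing, so no root privileges or hardware are needed.
set -euo pipefail

TARGET_IF=cvl_0_0      # NVMF_TARGET_INTERFACE in the trace
INITIATOR_IF=cvl_0_1   # NVMF_INITIATOR_INTERFACE
NS=cvl_0_0_ns_spdk     # NVMF_TARGET_NAMESPACE

run() { echo "+ $*"; }   # swap the body for "$@" to actually execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Putting the target port in its own namespace is what forces the subsequent NVMe/TCP traffic through the real wire between the two ports instead of the kernel loopback path.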
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2265393 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2265393 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2265393 ']' 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:28.176 [2024-12-09 10:51:28.300442] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:40:28.176 [2024-12-09 10:51:28.300516] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.176 [2024-12-09 10:51:28.422860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:28.176 [2024-12-09 10:51:28.477383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:28.176 [2024-12-09 10:51:28.477429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.176 [2024-12-09 10:51:28.477445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.176 [2024-12-09 10:51:28.477459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.176 [2024-12-09 10:51:28.477471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:28.176 [2024-12-09 10:51:28.479038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.176 [2024-12-09 10:51:28.479045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 [2024-12-09 10:51:29.296320] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 [2024-12-09 10:51:29.312506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 NULL1 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 Delay0 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2265592 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:28.176 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:28.435 [2024-12-09 10:51:29.417460] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
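The target stack exercised here is assembled by the `rpc_cmd` calls traced above (delete_subsystem.sh lines 15-24): a TCP transport, subsystem `cnode1`, a listener on 10.0.0.2:4420, a null bdev, and a delay bdev layered on top of it so in-flight I/O exists when the subsystem is later deleted. A dry-run sketch of that sequence, assuming SPDK's `rpc.py`; the `rpc` wrapper only echoes, so it runs without a live `nvmf_tgt`.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence from delete_subsystem.sh.
# `rpc` echoes instead of invoking SPDK's rpc.py, so no running
# nvmf_tgt is required; arguments mirror this log.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "+ rpc.py $*"; }   # swap for `rpc.py "$@"` against a live target

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512   # 1000 MiB backing, 512 B blocks
# Delay every I/O by ~1 s (values in microseconds: avg/p99 read, avg/p99 write)
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
```

The 1-second artificial latency is the point of the test: it guarantees `spdk_nvme_perf` still has queued commands when `nvmf_delete_subsystem` runs, which is why the log below is full of aborted completions rather than a clean shutdown.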
00:40:30.384 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:30.384 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.384 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error 
(sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 [2024-12-09 10:51:31.539455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3320000c40 is same with the state(6) to be set 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, 
sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read 
completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read 
completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 starting I/O failed: -6 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Read completed with error (sct=0, sc=8) 00:40:30.384 Write completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 starting I/O failed: -6 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Write completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 starting I/O failed: -6 00:40:30.385 Read completed with error (sct=0, sc=8) 
00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 starting I/O failed: -6 00:40:30.385 Write completed with error (sct=0, sc=8) 00:40:30.385 Write completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 starting I/O failed: -6 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Write completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 Read completed with error (sct=0, sc=8) 00:40:30.385 starting I/O failed: -6 00:40:31.763 [2024-12-09 10:51:32.512243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1984720 is same with the state(6) to be set 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 [2024-12-09 10:51:32.541562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f332000d350 is same with the state(6) to be 
set 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 [2024-12-09 10:51:32.543202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1984900 is same with the state(6) to be set 00:40:31.763 Read 
completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error 
(sct=0, sc=8) 00:40:31.763 [2024-12-09 10:51:32.543435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983740 is same with the state(6) to be set 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Read completed with error (sct=0, sc=8) 00:40:31.763 Write completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Write completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Write completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 
00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8) 00:40:31.764 Write completed with error (sct=0, sc=8) 00:40:31.764 Read completed with error (sct=0, sc=8)
00:40:31.764 [2024-12-09 10:51:32.544364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1984ae0 is same with the state(6) to be set
00:40:31.764 Initializing NVMe Controllers
00:40:31.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:31.764 Controller IO queue size 128, less than required.
00:40:31.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:31.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:40:31.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:40:31.764 Initialization complete. Launching workers.
00:40:31.764 ========================================================
00:40:31.764 Latency(us)
00:40:31.764 Device Information : IOPS MiB/s Average min max
00:40:31.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.13 0.09 1048344.98 480.85 2002503.39
00:40:31.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.32 0.07 916322.81 319.16 2000603.63
00:40:31.764 ========================================================
00:40:31.764 Total : 323.45 0.16 987396.09 319.16 2002503.39
00:40:31.764
00:40:31.764 [2024-12-09 10:51:32.545085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1984720 (9): Bad file descriptor
00:40:31.764 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:31.764 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:40:31.764 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 2265592 00:40:31.764 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:31.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2265592 00:40:32.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2265592) - No such process 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2265592 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2265592 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2265592 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:32.021 [2024-12-09 10:51:33.072096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:32.021 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.022 10:51:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2265966 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:32.022 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:32.022 [2024-12-09 10:51:33.155753] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
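The xtrace above launches spdk_nvme_perf in the background, then polls it with `kill -0 $pid` and a bounded `sleep 0.5` loop (delete_subsystem.sh lines 56-60). A minimal standalone sketch of that poll-until-exit pattern; the function name `wait_for_exit` is illustrative and not part of SPDK:

```shell
#!/usr/bin/env bash
# Sketch of the poll-until-exit loop traced above: kill -0 only checks
# whether the PID exists (no signal is delivered), and the loop gives up
# after roughly limit * 0.5 seconds.
wait_for_exit() {
    local pid=$1 limit=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > limit )) && return 1   # gave up: process still alive
        sleep 0.5
    done
    return 0   # process has exited
}

sleep 1 &
wait_for_exit "$!" 20 && echo "exited"
```

This mirrors the trace's outcome: once the perf process dies, `kill -0` reports "No such process" and the loop falls through.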
00:40:32.587 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:32.587 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:32.587 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.154 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.154 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:33.154 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.722 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.722 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:33.722 10:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.982 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.982 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:33.982 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:34.549 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:34.549 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:34.549 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:35.115 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:35.115 10:51:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966 00:40:35.115 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:40:35.374 Initializing NVMe Controllers
00:40:35.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:35.374 Controller IO queue size 128, less than required.
00:40:35.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:35.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:40:35.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:40:35.374 Initialization complete. Launching workers.
00:40:35.374 ========================================================
00:40:35.374 Latency(us)
00:40:35.374 Device Information : IOPS MiB/s Average min max
00:40:35.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004333.82 1000149.38 1040693.92
00:40:35.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004560.68 1000191.53 1041417.95
00:40:35.374 ========================================================
00:40:35.374 Total : 256.00 0.12 1004447.25 1000149.38 1041417.95
00:40:35.374
00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2265966
00:40:35.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2265966) - No such process
00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2265966
00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:35.633 rmmod nvme_tcp 00:40:35.633 rmmod nvme_fabrics 00:40:35.633 rmmod nvme_keyring 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2265393 ']' 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2265393 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2265393 ']' 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2265393 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:35.633 10:51:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265393 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265393' 00:40:35.633 killing process with pid 2265393 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2265393 00:40:35.633 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2265393 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:40:35.892 10:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:35.892 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:35.892 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:35.892 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:40:35.892 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:35.892 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:38.449 00:40:38.449 real 0m18.257s 00:40:38.449 user 0m31.039s 00:40:38.449 sys 0m7.022s 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:38.449 ************************************ 00:40:38.449 END TEST nvmf_delete_subsystem 00:40:38.449 ************************************ 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:40:38.449 ************************************ 00:40:38.449 START TEST nvmf_host_management 00:40:38.449 ************************************ 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:40:38.449 * Looking for test storage... 
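The `run_test nvmf_host_management ...` call above brackets the test script with "START TEST" / "END TEST" banners and reports real/user/sys times (the `real 0m18.257s` block for the previous test). An illustrative sketch of such a wrapper, not SPDK's actual run_test implementation:

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: print a banner, time the command,
# print a closing banner, and propagate the command's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # 'time' keyword: real/user/sys go to stderr
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_true true
```

Propagating `$?` matters because the surrounding pipeline (catchError in the Jenkins stage) keys the build result off the wrapped test's exit status.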
00:40:38.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:38.449 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:38.450 10:51:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:38.450 10:51:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.450 --rc genhtml_branch_coverage=1 00:40:38.450 --rc genhtml_function_coverage=1 00:40:38.450 --rc genhtml_legend=1 00:40:38.450 --rc geninfo_all_blocks=1 00:40:38.450 --rc geninfo_unexecuted_blocks=1 00:40:38.450 00:40:38.450 ' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.450 --rc genhtml_branch_coverage=1 00:40:38.450 --rc genhtml_function_coverage=1 00:40:38.450 --rc genhtml_legend=1 00:40:38.450 --rc geninfo_all_blocks=1 00:40:38.450 --rc geninfo_unexecuted_blocks=1 00:40:38.450 00:40:38.450 ' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.450 --rc genhtml_branch_coverage=1 00:40:38.450 --rc genhtml_function_coverage=1 00:40:38.450 --rc genhtml_legend=1 00:40:38.450 --rc geninfo_all_blocks=1 00:40:38.450 --rc geninfo_unexecuted_blocks=1 00:40:38.450 00:40:38.450 ' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.450 --rc genhtml_branch_coverage=1 00:40:38.450 --rc genhtml_function_coverage=1 00:40:38.450 --rc genhtml_legend=1 00:40:38.450 --rc geninfo_all_blocks=1 00:40:38.450 --rc geninfo_unexecuted_blocks=1 00:40:38.450 00:40:38.450 ' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
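The scripts/common.sh trace above (`lt 1.15 2`, `cmp_versions 1.15 '<' 2`) splits each version string on separators and compares numeric components left to right. A condensed sketch of that comparison; the function name `version_lt` is illustrative, and missing components are treated as 0 as in the traced logic:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced from scripts/common.sh:
# split on '.'/'-', then compare each numeric component in order.
version_lt() {   # returns 0 (true) if $1 < $2
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad short versions with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Component-wise numeric comparison is what makes `1.2 < 1.10` hold here, which a plain string comparison would get wrong.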
00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:38.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:38.450 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:45.029 10:51:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:45.029 10:51:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:45.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:45.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.029 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:45.030 10:51:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:45.030 Found net devices under 0000:af:00.0: cvl_0_0 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:45.030 Found net devices under 0000:af:00.1: cvl_0_1 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:45.030 10:51:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:45.030 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:45.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:45.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:40:45.030 00:40:45.030 --- 10.0.0.2 ping statistics --- 00:40:45.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.030 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:45.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:45.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:40:45.030 00:40:45.030 --- 10.0.0.1 ping statistics --- 00:40:45.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.030 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
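The trace above shows `nvmf_tcp_init` building the test topology: flush both ports, create a network namespace for the target, move `cvl_0_0` into it, address both ends on 10.0.0.0/24, open TCP port 4420, and verify connectivity with pings in both directions before loading `nvme-tcp`. The interface names, addresses, and port are taken directly from the log; the `run` wrapper below is an addition of this sketch so it can execute without root as a dry-run that only prints the commands.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side TCP bring-up traced in the log.
# Names/addresses (cvl_0_0, cvl_0_1, 10.0.0.1/2, port 4420, namespace
# cvl_0_0_ns_spdk) mirror the trace; real execution needs root and the
# physical NICs, so every command is echoed instead of executed.
set -euo pipefail

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0       # moved into the namespace, addressed 10.0.0.2
initiator_if=cvl_0_1    # stays in the root namespace, addressed 10.0.0.1

run() { echo "+ $*"; }  # replace body with eval "$@" (as root) to apply

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as the trace does:
run ping -c 1 10.0.0.2
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
run modprobe nvme-tcp
```

Keeping the target interface inside its own namespace forces the initiator's traffic through the kernel network stack rather than the loopback shortcut, which is why the pings cross 10.0.0.1 ↔ 10.0.0.2 instead of both sitting in the root namespace.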
00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2269755 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2269755 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2269755 ']' 00:40:45.030 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.290 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:45.290 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:45.290 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:45.290 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.290 [2024-12-09 10:51:46.263467] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:40:45.290 [2024-12-09 10:51:46.263536] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.290 [2024-12-09 10:51:46.364265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:45.290 [2024-12-09 10:51:46.410176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.290 [2024-12-09 10:51:46.410214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.290 [2024-12-09 10:51:46.410225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.290 [2024-12-09 10:51:46.410234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.290 [2024-12-09 10:51:46.410242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
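Between `nvmfappstart` launching `nvmf_tgt` inside the namespace and the reactor start-up notices above, `waitforlisten` blocks on the RPC socket (`/var/tmp/spdk.sock`, `max_retries=100` per the trace). The helper's exact body is not in this excerpt, so the following is a hedged sketch of the readiness poll it implies: confirm the process is still alive and its UNIX socket has appeared; the real helper additionally issues an RPC to confirm the app answers, which is omitted here. The third parameter is an addition of this sketch to make the loop testable.

```shell
# Hedged sketch of a waitforlisten-style readiness poll: succeed once the
# pid is alive AND its RPC socket exists; fail if the app dies or retries
# run out. (The real SPDK helper also round-trips an RPC; not shown.)
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
    [ -S "$sock" ] && return 0               # socket is up; app is listening
    sleep 0.5
  done
  return 1                                   # retries exhausted
}
```

Checking `kill -0` on every pass matters: without it, a target that crashes during DPDK/EAL init would leave the test hanging for the full retry budget instead of failing fast.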
00:40:45.290 [2024-12-09 10:51:46.413662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:45.290 [2024-12-09 10:51:46.413684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:45.290 [2024-12-09 10:51:46.413772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:45.290 [2024-12-09 10:51:46.413774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.231 [2024-12-09 10:51:47.245599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:46.231 10:51:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.231 Malloc0 00:40:46.231 [2024-12-09 10:51:47.340176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2269975 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2269975 /var/tmp/bdevperf.sock 00:40:46.231 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2269975 ']' 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:46.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:46.492 { 00:40:46.492 "params": { 00:40:46.492 "name": "Nvme$subsystem", 00:40:46.492 "trtype": "$TEST_TRANSPORT", 00:40:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:46.492 "adrfam": "ipv4", 00:40:46.492 "trsvcid": "$NVMF_PORT", 00:40:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:46.492 "hdgst": ${hdgst:-false}, 
00:40:46.492 "ddgst": ${ddgst:-false} 00:40:46.492 }, 00:40:46.492 "method": "bdev_nvme_attach_controller" 00:40:46.492 } 00:40:46.492 EOF 00:40:46.492 )") 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:46.492 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:46.492 "params": { 00:40:46.492 "name": "Nvme0", 00:40:46.492 "trtype": "tcp", 00:40:46.492 "traddr": "10.0.0.2", 00:40:46.492 "adrfam": "ipv4", 00:40:46.492 "trsvcid": "4420", 00:40:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:46.492 "hdgst": false, 00:40:46.492 "ddgst": false 00:40:46.492 }, 00:40:46.492 "method": "bdev_nvme_attach_controller" 00:40:46.492 }' 00:40:46.492 [2024-12-09 10:51:47.456768] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:40:46.492 [2024-12-09 10:51:47.456831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269975 ] 00:40:46.492 [2024-12-09 10:51:47.567824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.492 [2024-12-09 10:51:47.619042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.752 Running I/O for 10 seconds... 
00:40:46.752 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.752 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:46.752 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:46.752 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.752 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:47.012 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.274 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:47.274 [2024-12-09 10:51:48.307605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:47.274 [2024-12-09 10:51:48.307676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:47.274 [2024-12-09 10:51:48.307705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:47.274 [2024-12-09 10:51:48.307722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:47.274 [2024-12-09 10:51:48.307742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:47.274 [2024-12-09 10:51:48.307757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:47.274 [2024-12-09 10:51:48.307775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:47.274 [2024-12-09 10:51:48.307790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command / "ABORTED - SQ DELETION (00/08)" completion pair repeats on qid:1 for WRITE cid:41-63 (lba:78976-81792, step 128, len:128) and READ cid:0-36 (lba:73728-78336, step 128, len:128), timestamps 10:51:48.307808 through 10:51:48.309762 ...]
00:40:47.275 [2024-12-09 10:51:48.311175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:40:47.275 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:47.275 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:47.275
10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:47.275 task offset: 78464 on job bdev=Nvme0n1 fails
00:40:47.275
00:40:47.275 Latency(us)
00:40:47.275 [2024-12-09T09:51:48.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:47.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:47.275 Job: Nvme0n1 ended in about 0.44 seconds with error
00:40:47.275 Verification LBA range: start 0x0 length 0x400
00:40:47.275 Nvme0n1 : 0.44 1306.93 81.68 145.21 0.00 42538.75 2778.16 36244.26
00:40:47.275 [2024-12-09T09:51:48.451Z] ===================================================================================================================
00:40:47.275 [2024-12-09T09:51:48.451Z] Total : 1306.93 81.68 145.21 0.00 42538.75 2778.16 36244.26
00:40:47.275 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:40:47.275 [2024-12-09 10:51:48.314481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:47.275 [2024-12-09 10:51:48.314515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fda20 (9): Bad file descriptor
00:40:47.275 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:47.275 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:40:47.275 [2024-12-09 10:51:48.456961] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
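[editor's note] The run of NOTICE lines above is SPDK printing every outstanding I/O that completed with status "ABORTED - SQ DELETION" while the controller was being reset. The pair in parentheses is the NVMe status code type / status code: 00 is the generic command status set, and 08 within it is "Command Aborted due to SQ Deletion" in the NVMe base specification. A minimal illustrative decoder (not SPDK's own code, and only a small excerpt of the status table):

```shell
#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair SPDK prints, e.g. "(00/08)" above.
# Only a few generic-status entries are listed here; the full table
# lives in the NVMe base specification.
decode_status() {
    local sct=$1 sc=$2 name="UNKNOWN STATUS"
    if [ "$sct" = "00" ]; then          # 00 = generic command status set
        case "$sc" in
            00) name="SUCCESSFUL COMPLETION" ;;
            04) name="ABORTED - BY REQUEST" ;;
            08) name="ABORTED - SQ DELETION" ;;
        esac
    fi
    printf '%s (%s/%s)\n' "$name" "$sct" "$sc"
}

decode_status 00 08   # ABORTED - SQ DELETION (00/08)
```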
00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2269975 00:40:48.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2269975) - No such process 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:48.474 { 00:40:48.474 "params": { 00:40:48.474 "name": "Nvme$subsystem", 00:40:48.474 "trtype": "$TEST_TRANSPORT", 00:40:48.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:48.474 "adrfam": "ipv4", 00:40:48.474 "trsvcid": "$NVMF_PORT", 00:40:48.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:48.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:48.474 "hdgst": ${hdgst:-false}, 00:40:48.474 "ddgst": ${ddgst:-false} 00:40:48.474 }, 00:40:48.474 "method": "bdev_nvme_attach_controller" 00:40:48.474 } 00:40:48.474 EOF 00:40:48.474 )") 00:40:48.474 
10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:48.474 10:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:48.474 "params": { 00:40:48.474 "name": "Nvme0", 00:40:48.474 "trtype": "tcp", 00:40:48.474 "traddr": "10.0.0.2", 00:40:48.475 "adrfam": "ipv4", 00:40:48.475 "trsvcid": "4420", 00:40:48.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:48.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:48.475 "hdgst": false, 00:40:48.475 "ddgst": false 00:40:48.475 }, 00:40:48.475 "method": "bdev_nvme_attach_controller" 00:40:48.475 }' 00:40:48.475 [2024-12-09 10:51:49.389503] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:40:48.475 [2024-12-09 10:51:49.389578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270181 ] 00:40:48.475 [2024-12-09 10:51:49.516002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.475 [2024-12-09 10:51:49.567848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.044 Running I/O for 1 seconds... 
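[editor's note] bdevperf is invoked here with -q 64 -o 65536, i.e. queue depth 64 and 64 KiB I/Os, so the MiB/s column it reports is simply IOPS divided by 16 (65536 bytes = 1/16 MiB). A quick cross-check of the figures in this run (the helper name is ours, not a bdevperf utility):

```shell
#!/usr/bin/env bash
# Convert an IOPS figure to MiB/s for the 64 KiB I/O size used above:
# MiB/s = IOPS * 65536 / 1048576 = IOPS / 16.
iops_to_mibs() {
    awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 65536 / 1048576 }'
}

iops_to_mibs 1536.00   # 96.00  -> matches "1536.00 IOPS, 96.00 MiB/s"
iops_to_mibs 1586.73   # 99.17  -> matches the Nvme0n1 result row
```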
00:40:49.985 1536.00 IOPS, 96.00 MiB/s
00:40:49.985 Latency(us)
00:40:49.985 [2024-12-09T09:51:51.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:49.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:49.985 Verification LBA range: start 0x0 length 0x400
00:40:49.985 Nvme0n1 : 1.01 1586.73 99.17 0.00 0.00 39509.26 5898.24 34648.60
[2024-12-09T09:51:51.161Z] ===================================================================================================================
00:40:49.986 [2024-12-09T09:51:51.162Z] Total : 1586.73 99.17 0.00 0.00 39509.26 5898.24 34648.60
00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:50.247 10:51:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:50.247 rmmod nvme_tcp 00:40:50.247 rmmod nvme_fabrics 00:40:50.247 rmmod nvme_keyring 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2269755 ']' 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2269755 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2269755 ']' 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2269755 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269755 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269755' 00:40:50.247 killing process with pid 2269755 00:40:50.247 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2269755 00:40:50.247 10:51:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2269755 00:40:50.508 [2024-12-09 10:51:51.548459] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.508 10:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:53.047 00:40:53.047 real 0m14.488s 00:40:53.047 user 0m25.664s 
00:40:53.047 sys 0m6.388s 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:53.047 ************************************ 00:40:53.047 END TEST nvmf_host_management 00:40:53.047 ************************************ 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:40:53.047 ************************************ 00:40:53.047 START TEST nvmf_lvol 00:40:53.047 ************************************ 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:40:53.047 * Looking for test storage... 
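[editor's note] The killprocess helper traced in the host_management teardown above checks that the PID exists (kill -0), looks up the process name with ps on Linux, and refuses to signal a bare sudo wrapper before killing and reaping the target. A hedged sketch of that guard (our own simplified function, not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Simplified killprocess-style guard: only kill a live, non-sudo process,
# then reap it so no zombie is left behind.
killprocess_sketch() {
    local pid=$1 process_name=""
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # is the process alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" != sudo ] || return 1          # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; ignore the kill-induced status
}

sleep 60 &
killprocess_sketch $!   # prints: killing process with pid <pid of sleep>
```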
00:40:53.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:53.047 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:53.048 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:53.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.048 --rc genhtml_branch_coverage=1 00:40:53.048 --rc genhtml_function_coverage=1 00:40:53.048 --rc genhtml_legend=1 00:40:53.048 --rc geninfo_all_blocks=1 00:40:53.048 --rc geninfo_unexecuted_blocks=1 
00:40:53.048 00:40:53.048 ' 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:53.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.048 --rc genhtml_branch_coverage=1 00:40:53.048 --rc genhtml_function_coverage=1 00:40:53.048 --rc genhtml_legend=1 00:40:53.048 --rc geninfo_all_blocks=1 00:40:53.048 --rc geninfo_unexecuted_blocks=1 00:40:53.048 00:40:53.048 ' 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:53.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.048 --rc genhtml_branch_coverage=1 00:40:53.048 --rc genhtml_function_coverage=1 00:40:53.048 --rc genhtml_legend=1 00:40:53.048 --rc geninfo_all_blocks=1 00:40:53.048 --rc geninfo_unexecuted_blocks=1 00:40:53.048 00:40:53.048 ' 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:53.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.048 --rc genhtml_branch_coverage=1 00:40:53.048 --rc genhtml_function_coverage=1 00:40:53.048 --rc genhtml_legend=1 00:40:53.048 --rc geninfo_all_blocks=1 00:40:53.048 --rc geninfo_unexecuted_blocks=1 00:40:53.048 00:40:53.048 ' 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:53.048 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:53.048 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:53.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:53.048 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:59.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:59.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.627 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.628 
10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:59.628 Found net devices under 0000:af:00.0: cvl_0_0 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.628 10:52:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:59.628 Found net devices under 0000:af:00.1: cvl_0_1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:59.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:59.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:40:59.628 00:40:59.628 --- 10.0.0.2 ping statistics --- 00:40:59.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.628 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:59.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:59.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:40:59.628 00:40:59.628 --- 10.0.0.1 ping statistics --- 00:40:59.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.628 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:59.628 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2273682 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2273682 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2273682 ']' 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:59.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:59.629 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:40:59.629 [2024-12-09 10:52:00.523418] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:40:59.629 [2024-12-09 10:52:00.523488] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:59.629 [2024-12-09 10:52:00.655577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:59.629 [2024-12-09 10:52:00.709113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:59.629 [2024-12-09 10:52:00.709163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:59.629 [2024-12-09 10:52:00.709178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:59.629 [2024-12-09 10:52:00.709192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:59.629 [2024-12-09 10:52:00.709204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:59.629 [2024-12-09 10:52:00.710897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:59.629 [2024-12-09 10:52:00.710988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:59.629 [2024-12-09 10:52:00.710993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:59.888 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:00.148 [2024-12-09 10:52:01.139724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:00.148 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.407 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:00.407 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.667 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:00.667 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:00.667 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:01.235 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c524415b-59b6-4d28-b376-7d81bcf8f0da 00:41:01.235 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c524415b-59b6-4d28-b376-7d81bcf8f0da lvol 20 00:41:01.235 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f65f9f22-50ed-4ac1-a5e2-49d922200c18 00:41:01.235 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:01.804 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f65f9f22-50ed-4ac1-a5e2-49d922200c18 00:41:01.804 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.062 [2024-12-09 10:52:03.202798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.062 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:02.630 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2274163 00:41:02.630 10:52:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:02.630 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:03.569 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f65f9f22-50ed-4ac1-a5e2-49d922200c18 MY_SNAPSHOT 00:41:03.828 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c28a00f9-9fe1-4a24-8a08-31993a2c12cc 00:41:03.828 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f65f9f22-50ed-4ac1-a5e2-49d922200c18 30 00:41:04.089 10:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c28a00f9-9fe1-4a24-8a08-31993a2c12cc MY_CLONE 00:41:04.348 10:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e8d0790b-401a-4078-b06d-1b38b501eb4b 00:41:04.348 10:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e8d0790b-401a-4078-b06d-1b38b501eb4b 00:41:05.287 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2274163 00:41:13.417 Initializing NVMe Controllers 00:41:13.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:13.417 Controller IO queue size 128, less than required. 00:41:13.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:41:13.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:41:13.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:41:13.417 Initialization complete. Launching workers.
00:41:13.417 ========================================================
00:41:13.417 Latency(us)
00:41:13.417 Device Information : IOPS MiB/s Average min max
00:41:13.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12386.00 48.38 10339.13 744.53 79129.45
00:41:13.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8779.20 34.29 14582.41 1695.23 53749.04
00:41:13.417 ========================================================
00:41:13.417 Total : 21165.20 82.68 12099.22 744.53 79129.45
00:41:13.417
00:41:13.417 10:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:13.417 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f65f9f22-50ed-4ac1-a5e2-49d922200c18
00:41:13.417 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c524415b-59b6-4d28-b376-7d81bcf8f0da
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol --
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:13.986 rmmod nvme_tcp 00:41:13.986 rmmod nvme_fabrics 00:41:13.986 rmmod nvme_keyring 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2273682 ']' 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2273682 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2273682 ']' 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2273682 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:13.986 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273682 00:41:13.986 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:13.986 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:13.986 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273682' 00:41:13.986 killing process with pid 2273682 00:41:13.986 10:52:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2273682 00:41:13.986 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2273682 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:14.245 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:16.781 00:41:16.781 real 0m23.648s 00:41:16.781 user 1m6.418s 00:41:16.781 sys 0m9.380s 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:16.781 ************************************ 00:41:16.781 END TEST 
nvmf_lvol 00:41:16.781 ************************************ 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:16.781 ************************************ 00:41:16.781 START TEST nvmf_lvs_grow 00:41:16.781 ************************************ 00:41:16.781 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:41:16.781 * Looking for test storage... 00:41:16.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:16.782 10:52:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.782 --rc genhtml_branch_coverage=1 00:41:16.782 --rc genhtml_function_coverage=1 00:41:16.782 --rc genhtml_legend=1 00:41:16.782 --rc geninfo_all_blocks=1 00:41:16.782 --rc geninfo_unexecuted_blocks=1 00:41:16.782 00:41:16.782 ' 
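The cmp_versions trace above splits each version string on `.`, `-` and `:` (`IFS=.-:`) and compares the numeric components left to right, so `lt 1.15 2` is true because 1 < 2 decides in the first component. A rough Python equivalent of that logic (a sketch of the idea, not the script's actual code; non-numeric components are simplified to 0 here):

```python
import re

def parse(ver):
    # Split on '.', '-' and ':' like the shell's IFS=.-: read -ra does;
    # treat any non-numeric component as 0 (a simplification).
    return [int(p) if p.isdigit() else 0 for p in re.split(r"[.\-:]", ver)]

def lt(v1, v2):
    a, b = parse(v1), parse(v2)
    # Compare component-wise, padding the shorter list with zeros,
    # mirroring the loop over max(ver1_l, ver2_l) in cmp_versions.
    for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
        if x != y:
            return x < y
    return False

print(lt("1.15", "2"))  # → True, as in the `lt 1.15 2` call traced above
```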
00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.782 --rc genhtml_branch_coverage=1 00:41:16.782 --rc genhtml_function_coverage=1 00:41:16.782 --rc genhtml_legend=1 00:41:16.782 --rc geninfo_all_blocks=1 00:41:16.782 --rc geninfo_unexecuted_blocks=1 00:41:16.782 00:41:16.782 ' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.782 --rc genhtml_branch_coverage=1 00:41:16.782 --rc genhtml_function_coverage=1 00:41:16.782 --rc genhtml_legend=1 00:41:16.782 --rc geninfo_all_blocks=1 00:41:16.782 --rc geninfo_unexecuted_blocks=1 00:41:16.782 00:41:16.782 ' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:16.782 --rc genhtml_branch_coverage=1 00:41:16.782 --rc genhtml_function_coverage=1 00:41:16.782 --rc genhtml_legend=1 00:41:16.782 --rc geninfo_all_blocks=1 00:41:16.782 --rc geninfo_unexecuted_blocks=1 00:41:16.782 00:41:16.782 ' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:16.782 10:52:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:16.782 
10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:16.782 10:52:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:16.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:16.782 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:16.783 
10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:41:16.783 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:23.352 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:23.352 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:23.352 
10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:23.352 Found net devices under 0000:af:00.0: cvl_0_0 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:23.352 Found net devices under 0000:af:00.1: cvl_0_1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:23.352 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:23.353 10:52:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:23.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:23.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:41:23.353 00:41:23.353 --- 10.0.0.2 ping statistics --- 00:41:23.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.353 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:23.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:23.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:41:23.353 00:41:23.353 --- 10.0.0.1 ping statistics --- 00:41:23.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.353 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
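The two ping checks above confirm bidirectional reachability between cvl_0_1 (initiator side, 10.0.0.1) and cvl_0_0 inside the cvl_0_0_ns_spdk namespace (target side, 10.0.0.2). A minimal sketch of parsing the rtt summary line such a check prints — the sample line is copied from the log; the `parse_rtt` helper is illustrative, not part of the test scripts:

```python
import re

# Sample summary line copied from the target-side ping in the log.
ping_line = "rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms"

def parse_rtt(line):
    """Return (min, avg, max, mdev) in ms from a ping summary line."""
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line)
    if m is None:
        raise ValueError("not an rtt summary line")
    return tuple(float(x) for x in m.groups())

print(parse_rtt(ping_line))  # (0.282, 0.282, 0.282, 0.0)
```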
nvmfappstart -m 0x1 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2278838 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2278838 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2278838 ']' 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:23.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:23.353 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.612 [2024-12-09 10:52:24.532600] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:41:23.612 [2024-12-09 10:52:24.532688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.612 [2024-12-09 10:52:24.666257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.612 [2024-12-09 10:52:24.718882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.612 [2024-12-09 10:52:24.718925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.612 [2024-12-09 10:52:24.718941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.612 [2024-12-09 10:52:24.718955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.612 [2024-12-09 10:52:24.718967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:23.612 [2024-12-09 10:52:24.719567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:23.871 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:24.129 [2024-12-09 10:52:25.135152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:24.129 ************************************ 00:41:24.129 START TEST lvs_grow_clean 00:41:24.129 ************************************ 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:24.129 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:24.388 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:24.388 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:24.647 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=997ca7dc-1573-4968-b1dc-039850071240 00:41:24.647 10:52:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:24.647 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:24.906 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:24.906 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:24.906 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 997ca7dc-1573-4968-b1dc-039850071240 lvol 150 00:41:25.164 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=53d4d165-1065-4d9f-9660-682b8b6d34b3 00:41:25.164 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:25.423 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:25.423 [2024-12-09 10:52:26.597534] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:25.423 [2024-12-09 10:52:26.597615] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:25.682 true 00:41:25.683 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:25.683 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:25.942 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:25.942 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:26.200 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53d4d165-1065-4d9f-9660-682b8b6d34b3 00:41:26.460 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:26.719 [2024-12-09 10:52:27.724948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:26.719 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2279403 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
2279403 /var/tmp/bdevperf.sock 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2279403 ']' 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:26.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:26.978 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:26.978 [2024-12-09 10:52:28.066332] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:41:26.978 [2024-12-09 10:52:28.066415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279403 ] 00:41:27.238 [2024-12-09 10:52:28.174848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.238 [2024-12-09 10:52:28.218860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.238 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.238 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:41:27.238 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:27.807 Nvme0n1 00:41:27.807 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:27.807 [ 00:41:27.807 { 00:41:27.807 "name": "Nvme0n1", 00:41:27.807 "aliases": [ 00:41:27.807 "53d4d165-1065-4d9f-9660-682b8b6d34b3" 00:41:27.807 ], 00:41:27.807 "product_name": "NVMe disk", 00:41:27.807 "block_size": 4096, 00:41:27.807 "num_blocks": 38912, 00:41:27.807 "uuid": "53d4d165-1065-4d9f-9660-682b8b6d34b3", 00:41:27.807 "numa_id": 1, 00:41:27.807 "assigned_rate_limits": { 00:41:27.807 "rw_ios_per_sec": 0, 00:41:27.807 "rw_mbytes_per_sec": 0, 00:41:27.807 "r_mbytes_per_sec": 0, 00:41:27.807 "w_mbytes_per_sec": 0 00:41:27.807 }, 00:41:27.807 "claimed": false, 00:41:27.807 "zoned": false, 00:41:27.807 "supported_io_types": { 00:41:27.807 "read": true, 
00:41:27.807 "write": true, 00:41:27.807 "unmap": true, 00:41:27.807 "flush": true, 00:41:27.807 "reset": true, 00:41:27.807 "nvme_admin": true, 00:41:27.807 "nvme_io": true, 00:41:27.807 "nvme_io_md": false, 00:41:27.807 "write_zeroes": true, 00:41:27.807 "zcopy": false, 00:41:27.807 "get_zone_info": false, 00:41:27.807 "zone_management": false, 00:41:27.807 "zone_append": false, 00:41:27.807 "compare": true, 00:41:27.807 "compare_and_write": true, 00:41:27.807 "abort": true, 00:41:27.807 "seek_hole": false, 00:41:27.807 "seek_data": false, 00:41:27.807 "copy": true, 00:41:27.807 "nvme_iov_md": false 00:41:27.807 }, 00:41:27.807 "memory_domains": [ 00:41:27.807 { 00:41:27.807 "dma_device_id": "system", 00:41:27.807 "dma_device_type": 1 00:41:27.807 } 00:41:27.807 ], 00:41:27.807 "driver_specific": { 00:41:27.807 "nvme": [ 00:41:27.807 { 00:41:27.807 "trid": { 00:41:27.807 "trtype": "TCP", 00:41:27.807 "adrfam": "IPv4", 00:41:27.807 "traddr": "10.0.0.2", 00:41:27.807 "trsvcid": "4420", 00:41:27.807 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:27.807 }, 00:41:27.807 "ctrlr_data": { 00:41:27.807 "cntlid": 1, 00:41:27.807 "vendor_id": "0x8086", 00:41:27.807 "model_number": "SPDK bdev Controller", 00:41:27.807 "serial_number": "SPDK0", 00:41:27.807 "firmware_revision": "25.01", 00:41:27.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:27.807 "oacs": { 00:41:27.807 "security": 0, 00:41:27.807 "format": 0, 00:41:27.807 "firmware": 0, 00:41:27.807 "ns_manage": 0 00:41:27.807 }, 00:41:27.807 "multi_ctrlr": true, 00:41:27.807 "ana_reporting": false 00:41:27.807 }, 00:41:27.807 "vs": { 00:41:27.807 "nvme_version": "1.3" 00:41:27.807 }, 00:41:27.807 "ns_data": { 00:41:27.807 "id": 1, 00:41:27.807 "can_share": true 00:41:27.807 } 00:41:27.807 } 00:41:27.807 ], 00:41:27.807 "mp_policy": "active_passive" 00:41:27.807 } 00:41:27.807 } 00:41:27.807 ] 00:41:28.066 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
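The bdev dump above reports `"block_size": 4096` and `"num_blocks": 38912` for the exported lvol namespace. That is 152 MiB, not the 150 MiB requested at creation, because the lvol is rounded up to whole 4 MiB clusters. A sketch of the arithmetic (constants copied from the log; the rounding interpretation is inferred from these numbers):

```python
BLOCK_SIZE = 4096          # "block_size" from the bdev dump
NUM_BLOCKS = 38912         # "num_blocks" from the bdev dump
CLUSTER = 4 * 1024 * 1024  # lvstore --cluster-sz 4194304

size_bytes = BLOCK_SIZE * NUM_BLOCKS
print(size_bytes // (1024 * 1024))  # 152 (MiB), rounded up from the 150 requested
print(size_bytes // CLUSTER)        # 38 whole clusters
```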
run_test_pid=2279492 00:41:28.066 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:28.066 10:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:28.066 Running I/O for 10 seconds... 00:41:29.004 Latency(us) 00:41:29.004 [2024-12-09T09:52:30.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:29.005 Nvme0n1 : 1.00 15463.00 60.40 0.00 0.00 0.00 0.00 0.00 00:41:29.005 [2024-12-09T09:52:30.181Z] =================================================================================================================== 00:41:29.005 [2024-12-09T09:52:30.181Z] Total : 15463.00 60.40 0.00 0.00 0.00 0.00 0.00 00:41:29.005 00:41:29.942 10:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:29.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:29.942 Nvme0n1 : 2.00 15548.00 60.73 0.00 0.00 0.00 0.00 0.00 00:41:29.942 [2024-12-09T09:52:31.118Z] =================================================================================================================== 00:41:29.942 [2024-12-09T09:52:31.118Z] Total : 15548.00 60.73 0.00 0.00 0.00 0.00 0.00 00:41:29.942 00:41:30.202 true 00:41:30.202 10:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:30.202 10:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:41:30.461 10:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:30.461 10:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:30.461 10:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2279492 00:41:31.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:31.029 Nvme0n1 : 3.00 15611.00 60.98 0.00 0.00 0.00 0.00 0.00 00:41:31.029 [2024-12-09T09:52:32.205Z] =================================================================================================================== 00:41:31.029 [2024-12-09T09:52:32.205Z] Total : 15611.00 60.98 0.00 0.00 0.00 0.00 0.00 00:41:31.029 00:41:31.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:31.970 Nvme0n1 : 4.00 15671.25 61.22 0.00 0.00 0.00 0.00 0.00 00:41:31.970 [2024-12-09T09:52:33.146Z] =================================================================================================================== 00:41:31.970 [2024-12-09T09:52:33.146Z] Total : 15671.25 61.22 0.00 0.00 0.00 0.00 0.00 00:41:31.970 00:41:33.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:33.359 Nvme0n1 : 5.00 15716.20 61.39 0.00 0.00 0.00 0.00 0.00 00:41:33.359 [2024-12-09T09:52:34.535Z] =================================================================================================================== 00:41:33.359 [2024-12-09T09:52:34.535Z] Total : 15716.20 61.39 0.00 0.00 0.00 0.00 0.00 00:41:33.359 00:41:33.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:33.928 Nvme0n1 : 6.00 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:41:33.928 [2024-12-09T09:52:35.104Z] =================================================================================================================== 00:41:33.928 
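The total_data_clusters count moves from 49 to 99 after the aio file is truncated to 400M and bdev_lvol_grow_lvstore runs. With a 4 MiB cluster size, 200 MiB holds 50 clusters and 400 MiB holds 100; the observed counts are one lower each time, which these numbers suggest is one cluster of lvstore metadata overhead (inferred from the log, not from SPDK documentation). A sketch:

```python
CLUSTER_SZ = 4 * 1024 * 1024  # --cluster-sz 4194304 from the log
MIB = 1024 * 1024

def data_clusters(aio_size_mb, metadata_clusters=1):
    # metadata_clusters=1 is inferred from the observed counts
    # (200M -> 49, 400M -> 99), not taken from SPDK documentation.
    return aio_size_mb * MIB // CLUSTER_SZ - metadata_clusters

print(data_clusters(200))  # 49, the pre-grow check
print(data_clusters(400))  # 99, the post-grow check
```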
[2024-12-09T09:52:35.104Z] Total : 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:41:33.928 00:41:35.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:35.306 Nvme0n1 : 7.00 15751.71 61.53 0.00 0.00 0.00 0.00 0.00 00:41:35.306 [2024-12-09T09:52:36.482Z] =================================================================================================================== 00:41:35.306 [2024-12-09T09:52:36.483Z] Total : 15751.71 61.53 0.00 0.00 0.00 0.00 0.00 00:41:35.307 00:41:36.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:36.244 Nvme0n1 : 8.00 15761.62 61.57 0.00 0.00 0.00 0.00 0.00 00:41:36.244 [2024-12-09T09:52:37.420Z] =================================================================================================================== 00:41:36.244 [2024-12-09T09:52:37.420Z] Total : 15761.62 61.57 0.00 0.00 0.00 0.00 0.00 00:41:36.244 00:41:37.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:37.181 Nvme0n1 : 9.00 15779.67 61.64 0.00 0.00 0.00 0.00 0.00 00:41:37.181 [2024-12-09T09:52:38.357Z] =================================================================================================================== 00:41:37.181 [2024-12-09T09:52:38.357Z] Total : 15779.67 61.64 0.00 0.00 0.00 0.00 0.00 00:41:37.181 00:41:38.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:38.118 Nvme0n1 : 10.00 15787.50 61.67 0.00 0.00 0.00 0.00 0.00 00:41:38.118 [2024-12-09T09:52:39.294Z] =================================================================================================================== 00:41:38.118 [2024-12-09T09:52:39.294Z] Total : 15787.50 61.67 0.00 0.00 0.00 0.00 0.00 00:41:38.118 00:41:38.118 00:41:38.118 Latency(us) 00:41:38.118 [2024-12-09T09:52:39.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:38.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:41:38.118 Nvme0n1 : 10.00 15794.11 61.70 0.00 0.00 8100.53 4900.95 18236.10 00:41:38.118 [2024-12-09T09:52:39.294Z] =================================================================================================================== 00:41:38.118 [2024-12-09T09:52:39.294Z] Total : 15794.11 61.70 0.00 0.00 8100.53 4900.95 18236.10 00:41:38.118 { 00:41:38.118 "results": [ 00:41:38.118 { 00:41:38.118 "job": "Nvme0n1", 00:41:38.118 "core_mask": "0x2", 00:41:38.118 "workload": "randwrite", 00:41:38.118 "status": "finished", 00:41:38.118 "queue_depth": 128, 00:41:38.118 "io_size": 4096, 00:41:38.118 "runtime": 10.003918, 00:41:38.118 "iops": 15794.111866970521, 00:41:38.118 "mibps": 61.6957494803536, 00:41:38.118 "io_failed": 0, 00:41:38.118 "io_timeout": 0, 00:41:38.118 "avg_latency_us": 8100.533029703069, 00:41:38.118 "min_latency_us": 4900.953043478261, 00:41:38.118 "max_latency_us": 18236.104347826087 00:41:38.118 } 00:41:38.118 ], 00:41:38.118 "core_count": 1 00:41:38.118 } 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2279403 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2279403 ']' 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2279403 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279403 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:38.118 10:52:39 
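The final summary JSON makes the bdevperf numbers easy to cross-check: mibps is just iops * io_size scaled to MiB. A sketch parsing a trimmed copy of the result printed above (the trimming is mine; only fields shown in the log are used):

```python
import json

# Trimmed copy of the perform_tests result printed in the log.
results_json = """
{
  "results": [
    {
      "job": "Nvme0n1",
      "io_size": 4096,
      "runtime": 10.003918,
      "iops": 15794.111866970521,
      "mibps": 61.6957494803536
    }
  ],
  "core_count": 1
}
"""

res = json.loads(results_json)["results"][0]
derived_mibps = res["iops"] * res["io_size"] / (1024 * 1024)
print(round(derived_mibps, 2))  # 61.7, matches the reported mibps
```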
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279403' 00:41:38.118 killing process with pid 2279403 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2279403 00:41:38.118 Received shutdown signal, test time was about 10.000000 seconds 00:41:38.118 00:41:38.118 Latency(us) 00:41:38.118 [2024-12-09T09:52:39.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:38.118 [2024-12-09T09:52:39.294Z] =================================================================================================================== 00:41:38.118 [2024-12-09T09:52:39.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:38.118 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2279403 00:41:38.377 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:38.637 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:38.895 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:38.895 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:39.154 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:41:39.154 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:39.154 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:39.412 [2024-12-09 10:52:40.531690] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:39.412 
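The free_clusters=61 read here is consistent with the earlier counts: the grown lvstore has 99 data clusters, and the 150 MiB lvol occupies ceil(150/4) = 38 of them (the bdev dump later in the log reports num_allocated_clusters 38). A sketch of that arithmetic, using constants copied from the log:

```python
import math

CLUSTER_MB = 4            # --cluster-sz 4194304
LVOL_MB = 150             # bdev_lvol_create ... lvol 150
TOTAL_DATA_CLUSTERS = 99  # lvstore size after the grow

allocated = math.ceil(LVOL_MB / CLUSTER_MB)  # 150 MiB -> 38 whole clusters
free = TOTAL_DATA_CLUSTERS - allocated
print(allocated, free)  # 38 61
```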
10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:39.412 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:39.672 request: 00:41:39.672 { 00:41:39.672 "uuid": "997ca7dc-1573-4968-b1dc-039850071240", 00:41:39.672 "method": "bdev_lvol_get_lvstores", 00:41:39.672 "req_id": 1 00:41:39.672 } 00:41:39.672 Got JSON-RPC error response 00:41:39.672 response: 00:41:39.672 { 00:41:39.672 "code": -19, 00:41:39.672 "message": "No such device" 00:41:39.672 } 00:41:39.931 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:41:39.931 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:39.931 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:39.931 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:39.931 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:40.190 aio_bdev 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 53d4d165-1065-4d9f-9660-682b8b6d34b3 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=53d4d165-1065-4d9f-9660-682b8b6d34b3 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:40.190 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:40.450 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 53d4d165-1065-4d9f-9660-682b8b6d34b3 -t 2000 00:41:40.710 [ 00:41:40.710 { 00:41:40.710 "name": "53d4d165-1065-4d9f-9660-682b8b6d34b3", 00:41:40.710 "aliases": [ 00:41:40.710 "lvs/lvol" 00:41:40.710 ], 00:41:40.710 "product_name": "Logical Volume", 00:41:40.710 "block_size": 4096, 00:41:40.710 "num_blocks": 38912, 00:41:40.710 "uuid": "53d4d165-1065-4d9f-9660-682b8b6d34b3", 00:41:40.710 "assigned_rate_limits": { 00:41:40.710 "rw_ios_per_sec": 0, 00:41:40.710 "rw_mbytes_per_sec": 0, 00:41:40.710 "r_mbytes_per_sec": 0, 00:41:40.710 "w_mbytes_per_sec": 0 00:41:40.710 }, 00:41:40.710 "claimed": false, 00:41:40.710 "zoned": false, 00:41:40.710 "supported_io_types": { 00:41:40.710 "read": true, 00:41:40.710 "write": true, 00:41:40.710 "unmap": true, 00:41:40.710 "flush": false, 00:41:40.710 "reset": true, 00:41:40.710 
"nvme_admin": false, 00:41:40.710 "nvme_io": false, 00:41:40.710 "nvme_io_md": false, 00:41:40.710 "write_zeroes": true, 00:41:40.710 "zcopy": false, 00:41:40.710 "get_zone_info": false, 00:41:40.710 "zone_management": false, 00:41:40.710 "zone_append": false, 00:41:40.710 "compare": false, 00:41:40.710 "compare_and_write": false, 00:41:40.710 "abort": false, 00:41:40.710 "seek_hole": true, 00:41:40.710 "seek_data": true, 00:41:40.710 "copy": false, 00:41:40.710 "nvme_iov_md": false 00:41:40.710 }, 00:41:40.710 "driver_specific": { 00:41:40.710 "lvol": { 00:41:40.710 "lvol_store_uuid": "997ca7dc-1573-4968-b1dc-039850071240", 00:41:40.710 "base_bdev": "aio_bdev", 00:41:40.710 "thin_provision": false, 00:41:40.710 "num_allocated_clusters": 38, 00:41:40.710 "snapshot": false, 00:41:40.710 "clone": false, 00:41:40.710 "esnap_clone": false 00:41:40.710 } 00:41:40.710 } 00:41:40.710 } 00:41:40.710 ] 00:41:40.710 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:41:40.710 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:40.710 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:40.970 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:40.970 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:40.970 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:41.230 10:52:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:41.230 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 53d4d165-1065-4d9f-9660-682b8b6d34b3 00:41:41.489 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 997ca7dc-1573-4968-b1dc-039850071240 00:41:41.747 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:42.007 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:42.007 00:41:42.007 real 0m17.965s 00:41:42.007 user 0m16.958s 00:41:42.007 sys 0m2.402s 00:41:42.007 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.007 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:42.007 ************************************ 00:41:42.007 END TEST lvs_grow_clean 00:41:42.007 ************************************ 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.266 ************************************ 
00:41:42.266 START TEST lvs_grow_dirty 00:41:42.266 ************************************ 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:42.266 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:42.525 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:42.525 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:42.784 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:42.784 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:42.784 10:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:43.043 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:43.043 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:43.043 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 lvol 150 00:41:43.302 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:41:43.302 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:43.302 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:43.562 [2024-12-09 10:52:44.649576] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:41:43.562 [2024-12-09 10:52:44.649667] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:43.562 true 00:41:43.562 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:43.562 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:43.822 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:43.822 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:44.093 10:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:41:44.358 10:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:44.617 [2024-12-09 10:52:45.756890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.617 10:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2281786 00:41:44.876 10:52:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2281786 /var/tmp/bdevperf.sock 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2281786 ']' 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:44.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.876 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:45.136 [2024-12-09 10:52:46.091631] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:41:45.136 [2024-12-09 10:52:46.091720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281786 ] 00:41:45.136 [2024-12-09 10:52:46.187923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.136 [2024-12-09 10:52:46.230245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:45.396 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:45.396 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:45.396 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:45.656 Nvme0n1 00:41:45.656 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:45.916 [ 00:41:45.917 { 00:41:45.917 "name": "Nvme0n1", 00:41:45.917 "aliases": [ 00:41:45.917 "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd" 00:41:45.917 ], 00:41:45.917 "product_name": "NVMe disk", 00:41:45.917 "block_size": 4096, 00:41:45.917 "num_blocks": 38912, 00:41:45.917 "uuid": "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd", 00:41:45.917 "numa_id": 1, 00:41:45.917 "assigned_rate_limits": { 00:41:45.917 "rw_ios_per_sec": 0, 00:41:45.917 "rw_mbytes_per_sec": 0, 00:41:45.917 "r_mbytes_per_sec": 0, 00:41:45.917 "w_mbytes_per_sec": 0 00:41:45.917 }, 00:41:45.917 "claimed": false, 00:41:45.917 "zoned": false, 00:41:45.917 "supported_io_types": { 00:41:45.917 "read": true, 
00:41:45.917 "write": true, 00:41:45.917 "unmap": true, 00:41:45.917 "flush": true, 00:41:45.917 "reset": true, 00:41:45.917 "nvme_admin": true, 00:41:45.917 "nvme_io": true, 00:41:45.917 "nvme_io_md": false, 00:41:45.917 "write_zeroes": true, 00:41:45.917 "zcopy": false, 00:41:45.917 "get_zone_info": false, 00:41:45.917 "zone_management": false, 00:41:45.917 "zone_append": false, 00:41:45.917 "compare": true, 00:41:45.917 "compare_and_write": true, 00:41:45.917 "abort": true, 00:41:45.917 "seek_hole": false, 00:41:45.917 "seek_data": false, 00:41:45.917 "copy": true, 00:41:45.917 "nvme_iov_md": false 00:41:45.917 }, 00:41:45.917 "memory_domains": [ 00:41:45.917 { 00:41:45.917 "dma_device_id": "system", 00:41:45.917 "dma_device_type": 1 00:41:45.917 } 00:41:45.917 ], 00:41:45.917 "driver_specific": { 00:41:45.917 "nvme": [ 00:41:45.917 { 00:41:45.917 "trid": { 00:41:45.917 "trtype": "TCP", 00:41:45.917 "adrfam": "IPv4", 00:41:45.917 "traddr": "10.0.0.2", 00:41:45.917 "trsvcid": "4420", 00:41:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:45.917 }, 00:41:45.917 "ctrlr_data": { 00:41:45.917 "cntlid": 1, 00:41:45.917 "vendor_id": "0x8086", 00:41:45.917 "model_number": "SPDK bdev Controller", 00:41:45.917 "serial_number": "SPDK0", 00:41:45.917 "firmware_revision": "25.01", 00:41:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:45.917 "oacs": { 00:41:45.917 "security": 0, 00:41:45.917 "format": 0, 00:41:45.917 "firmware": 0, 00:41:45.917 "ns_manage": 0 00:41:45.917 }, 00:41:45.917 "multi_ctrlr": true, 00:41:45.917 "ana_reporting": false 00:41:45.917 }, 00:41:45.917 "vs": { 00:41:45.917 "nvme_version": "1.3" 00:41:45.917 }, 00:41:45.917 "ns_data": { 00:41:45.917 "id": 1, 00:41:45.917 "can_share": true 00:41:45.917 } 00:41:45.917 } 00:41:45.917 ], 00:41:45.917 "mp_policy": "active_passive" 00:41:45.917 } 00:41:45.917 } 00:41:45.917 ] 00:41:45.917 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2281966 00:41:45.917 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:45.917 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:46.177 Running I/O for 10 seconds... 00:41:47.116 Latency(us) 00:41:47.116 [2024-12-09T09:52:48.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:47.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:47.116 Nvme0n1 : 1.00 15488.00 60.50 0.00 0.00 0.00 0.00 0.00 00:41:47.116 [2024-12-09T09:52:48.292Z] =================================================================================================================== 00:41:47.116 [2024-12-09T09:52:48.292Z] Total : 15488.00 60.50 0.00 0.00 0.00 0.00 0.00 00:41:47.116 00:41:48.056 10:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:48.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.056 Nvme0n1 : 2.00 15612.50 60.99 0.00 0.00 0.00 0.00 0.00 00:41:48.056 [2024-12-09T09:52:49.232Z] =================================================================================================================== 00:41:48.056 [2024-12-09T09:52:49.232Z] Total : 15612.50 60.99 0.00 0.00 0.00 0.00 0.00 00:41:48.056 00:41:48.315 true 00:41:48.315 10:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:48.315 10:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:41:48.574 10:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:48.574 10:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:48.575 10:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2281966 00:41:49.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:49.145 Nvme0n1 : 3.00 15651.33 61.14 0.00 0.00 0.00 0.00 0.00 00:41:49.145 [2024-12-09T09:52:50.321Z] =================================================================================================================== 00:41:49.145 [2024-12-09T09:52:50.321Z] Total : 15651.33 61.14 0.00 0.00 0.00 0.00 0.00 00:41:49.145 00:41:50.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:50.085 Nvme0n1 : 4.00 15683.50 61.26 0.00 0.00 0.00 0.00 0.00 00:41:50.085 [2024-12-09T09:52:51.261Z] =================================================================================================================== 00:41:50.085 [2024-12-09T09:52:51.261Z] Total : 15683.50 61.26 0.00 0.00 0.00 0.00 0.00 00:41:50.085 00:41:51.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:51.026 Nvme0n1 : 5.00 15721.00 61.41 0.00 0.00 0.00 0.00 0.00 00:41:51.026 [2024-12-09T09:52:52.202Z] =================================================================================================================== 00:41:51.026 [2024-12-09T09:52:52.202Z] Total : 15721.00 61.41 0.00 0.00 0.00 0.00 0.00 00:41:51.026 00:41:51.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:51.965 Nvme0n1 : 6.00 15742.33 61.49 0.00 0.00 0.00 0.00 0.00 00:41:51.965 [2024-12-09T09:52:53.141Z] =================================================================================================================== 00:41:51.965 
[2024-12-09T09:52:53.141Z] Total : 15742.33 61.49 0.00 0.00 0.00 0.00 0.00 00:41:51.965 00:41:53.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:53.348 Nvme0n1 : 7.00 15756.29 61.55 0.00 0.00 0.00 0.00 0.00 00:41:53.348 [2024-12-09T09:52:54.524Z] =================================================================================================================== 00:41:53.348 [2024-12-09T09:52:54.524Z] Total : 15756.29 61.55 0.00 0.00 0.00 0.00 0.00 00:41:53.348 00:41:54.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:54.287 Nvme0n1 : 8.00 15770.25 61.60 0.00 0.00 0.00 0.00 0.00 00:41:54.287 [2024-12-09T09:52:55.463Z] =================================================================================================================== 00:41:54.287 [2024-12-09T09:52:55.463Z] Total : 15770.25 61.60 0.00 0.00 0.00 0.00 0.00 00:41:54.287 00:41:55.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:55.227 Nvme0n1 : 9.00 15781.00 61.64 0.00 0.00 0.00 0.00 0.00 00:41:55.227 [2024-12-09T09:52:56.403Z] =================================================================================================================== 00:41:55.227 [2024-12-09T09:52:56.403Z] Total : 15781.00 61.64 0.00 0.00 0.00 0.00 0.00 00:41:55.227 00:41:56.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:56.167 Nvme0n1 : 10.00 15801.10 61.72 0.00 0.00 0.00 0.00 0.00 00:41:56.167 [2024-12-09T09:52:57.343Z] =================================================================================================================== 00:41:56.167 [2024-12-09T09:52:57.343Z] Total : 15801.10 61.72 0.00 0.00 0.00 0.00 0.00 00:41:56.167 00:41:56.167 00:41:56.167 Latency(us) 00:41:56.167 [2024-12-09T09:52:57.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:56.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:41:56.167 Nvme0n1 : 10.01 15800.52 61.72 0.00 0.00 8096.92 1994.57 15500.69 00:41:56.167 [2024-12-09T09:52:57.343Z] =================================================================================================================== 00:41:56.167 [2024-12-09T09:52:57.343Z] Total : 15800.52 61.72 0.00 0.00 8096.92 1994.57 15500.69 00:41:56.167 { 00:41:56.167 "results": [ 00:41:56.167 { 00:41:56.167 "job": "Nvme0n1", 00:41:56.167 "core_mask": "0x2", 00:41:56.167 "workload": "randwrite", 00:41:56.167 "status": "finished", 00:41:56.167 "queue_depth": 128, 00:41:56.167 "io_size": 4096, 00:41:56.167 "runtime": 10.005433, 00:41:56.167 "iops": 15800.515579885448, 00:41:56.167 "mibps": 61.72076398392753, 00:41:56.167 "io_failed": 0, 00:41:56.167 "io_timeout": 0, 00:41:56.167 "avg_latency_us": 8096.917103935461, 00:41:56.167 "min_latency_us": 1994.5739130434783, 00:41:56.167 "max_latency_us": 15500.688695652174 00:41:56.167 } 00:41:56.167 ], 00:41:56.167 "core_count": 1 00:41:56.167 } 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2281786 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2281786 ']' 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2281786 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281786 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:56.167 10:52:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281786' 00:41:56.167 killing process with pid 2281786 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2281786 00:41:56.167 Received shutdown signal, test time was about 10.000000 seconds 00:41:56.167 00:41:56.167 Latency(us) 00:41:56.167 [2024-12-09T09:52:57.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:56.167 [2024-12-09T09:52:57.343Z] =================================================================================================================== 00:41:56.167 [2024-12-09T09:52:57.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:56.167 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2281786 00:41:56.428 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:56.688 10:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:56.947 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:56.948 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2278838 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2278838 00:41:57.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2278838 Killed "${NVMF_APP[@]}" "$@" 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2283401 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2283401 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2283401 ']' 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:57.207 10:52:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:57.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:57.207 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:57.467 [2024-12-09 10:52:58.417453] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:41:57.467 [2024-12-09 10:52:58.417527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:57.467 [2024-12-09 10:52:58.551797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.467 [2024-12-09 10:52:58.603669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:57.467 [2024-12-09 10:52:58.603718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:57.467 [2024-12-09 10:52:58.603733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:57.467 [2024-12-09 10:52:58.603747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:57.467 [2024-12-09 10:52:58.603758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:57.467 [2024-12-09 10:52:58.604383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:57.734 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:57.994 [2024-12-09 10:52:58.950671] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:57.994 [2024-12-09 10:52:58.950786] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:57.994 [2024-12-09 10:52:58.950832] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 
00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:57.994 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:58.254 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc65e856-79ed-4e91-bd8c-97cdcf0a59fd -t 2000 00:41:58.514 [ 00:41:58.514 { 00:41:58.514 "name": "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd", 00:41:58.514 "aliases": [ 00:41:58.514 "lvs/lvol" 00:41:58.514 ], 00:41:58.514 "product_name": "Logical Volume", 00:41:58.514 "block_size": 4096, 00:41:58.514 "num_blocks": 38912, 00:41:58.514 "uuid": "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd", 00:41:58.514 "assigned_rate_limits": { 00:41:58.514 "rw_ios_per_sec": 0, 00:41:58.514 "rw_mbytes_per_sec": 0, 00:41:58.514 "r_mbytes_per_sec": 0, 00:41:58.514 "w_mbytes_per_sec": 0 00:41:58.514 }, 00:41:58.514 "claimed": false, 00:41:58.514 "zoned": false, 00:41:58.514 "supported_io_types": { 00:41:58.514 "read": true, 00:41:58.514 "write": true, 00:41:58.514 "unmap": true, 00:41:58.514 "flush": false, 00:41:58.514 "reset": true, 00:41:58.514 "nvme_admin": false, 00:41:58.514 "nvme_io": false, 00:41:58.514 "nvme_io_md": false, 00:41:58.514 "write_zeroes": true, 00:41:58.514 "zcopy": false, 00:41:58.514 "get_zone_info": false, 00:41:58.514 "zone_management": false, 00:41:58.514 "zone_append": 
false, 00:41:58.514 "compare": false, 00:41:58.514 "compare_and_write": false, 00:41:58.514 "abort": false, 00:41:58.514 "seek_hole": true, 00:41:58.514 "seek_data": true, 00:41:58.514 "copy": false, 00:41:58.514 "nvme_iov_md": false 00:41:58.514 }, 00:41:58.514 "driver_specific": { 00:41:58.514 "lvol": { 00:41:58.514 "lvol_store_uuid": "4937efcf-067e-44c1-a2e4-94ebfd4557d1", 00:41:58.514 "base_bdev": "aio_bdev", 00:41:58.514 "thin_provision": false, 00:41:58.514 "num_allocated_clusters": 38, 00:41:58.514 "snapshot": false, 00:41:58.514 "clone": false, 00:41:58.514 "esnap_clone": false 00:41:58.514 } 00:41:58.514 } 00:41:58.514 } 00:41:58.514 ] 00:41:58.514 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:58.514 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:58.514 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:58.774 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:58.774 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:58.774 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:59.033 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:59.033 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:41:59.293 [2024-12-09 10:53:00.331651] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.293 10:53:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:59.293 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:41:59.553 request: 00:41:59.553 { 00:41:59.553 "uuid": "4937efcf-067e-44c1-a2e4-94ebfd4557d1", 00:41:59.553 "method": "bdev_lvol_get_lvstores", 00:41:59.553 "req_id": 1 00:41:59.553 } 00:41:59.553 Got JSON-RPC error response 00:41:59.553 response: 00:41:59.553 { 00:41:59.553 "code": -19, 00:41:59.553 "message": "No such device" 00:41:59.553 } 00:41:59.553 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:59.553 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:59.553 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:59.553 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:59.553 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:59.813 aio_bdev 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:59.813 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:00.072 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc65e856-79ed-4e91-bd8c-97cdcf0a59fd -t 2000 00:42:00.332 [ 00:42:00.332 { 00:42:00.332 "name": "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd", 00:42:00.332 "aliases": [ 00:42:00.332 "lvs/lvol" 00:42:00.332 ], 00:42:00.332 "product_name": "Logical Volume", 00:42:00.332 "block_size": 4096, 00:42:00.332 "num_blocks": 38912, 00:42:00.332 "uuid": "dc65e856-79ed-4e91-bd8c-97cdcf0a59fd", 00:42:00.332 "assigned_rate_limits": { 00:42:00.332 "rw_ios_per_sec": 0, 00:42:00.332 "rw_mbytes_per_sec": 0, 00:42:00.332 "r_mbytes_per_sec": 0, 00:42:00.332 "w_mbytes_per_sec": 0 00:42:00.332 }, 00:42:00.332 "claimed": false, 00:42:00.332 "zoned": false, 00:42:00.332 "supported_io_types": { 00:42:00.332 "read": true, 00:42:00.332 "write": true, 00:42:00.332 "unmap": true, 00:42:00.332 "flush": false, 00:42:00.332 "reset": true, 00:42:00.332 "nvme_admin": false, 00:42:00.332 "nvme_io": false, 00:42:00.332 "nvme_io_md": false, 00:42:00.332 "write_zeroes": true, 00:42:00.332 "zcopy": false, 00:42:00.332 "get_zone_info": false, 00:42:00.332 "zone_management": false, 00:42:00.332 "zone_append": false, 00:42:00.332 "compare": false, 00:42:00.332 "compare_and_write": false, 
00:42:00.332 "abort": false, 00:42:00.332 "seek_hole": true, 00:42:00.332 "seek_data": true, 00:42:00.332 "copy": false, 00:42:00.332 "nvme_iov_md": false 00:42:00.332 }, 00:42:00.332 "driver_specific": { 00:42:00.332 "lvol": { 00:42:00.332 "lvol_store_uuid": "4937efcf-067e-44c1-a2e4-94ebfd4557d1", 00:42:00.332 "base_bdev": "aio_bdev", 00:42:00.332 "thin_provision": false, 00:42:00.332 "num_allocated_clusters": 38, 00:42:00.332 "snapshot": false, 00:42:00.332 "clone": false, 00:42:00.332 "esnap_clone": false 00:42:00.332 } 00:42:00.332 } 00:42:00.332 } 00:42:00.332 ] 00:42:00.332 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:00.332 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:42:00.332 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:00.592 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:00.592 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:42:00.592 10:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:01.161 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:01.161 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc65e856-79ed-4e91-bd8c-97cdcf0a59fd 00:42:01.161 10:53:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4937efcf-067e-44c1-a2e4-94ebfd4557d1 00:42:01.730 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:01.731 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:01.731 00:42:01.731 real 0m19.647s 00:42:01.731 user 0m50.067s 00:42:01.731 sys 0m4.547s 00:42:01.731 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:01.731 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:01.731 ************************************ 00:42:01.731 END TEST lvs_grow_dirty 00:42:01.731 ************************************ 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:42:01.990 10:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:01.990 nvmf_trace.0 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:01.990 rmmod nvme_tcp 00:42:01.990 rmmod nvme_fabrics 00:42:01.990 rmmod nvme_keyring 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2283401 ']' 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2283401 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2283401 ']' 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2283401 
00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283401 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283401' 00:42:01.990 killing process with pid 2283401 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2283401 00:42:01.990 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2283401 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.251 10:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.791 00:42:04.791 real 0m48.010s 00:42:04.791 user 1m14.726s 00:42:04.791 sys 0m12.381s 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:04.791 ************************************ 00:42:04.791 END TEST nvmf_lvs_grow 00:42:04.791 ************************************ 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:04.791 ************************************ 00:42:04.791 START TEST nvmf_bdev_io_wait 00:42:04.791 ************************************ 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:42:04.791 * Looking for test storage... 
00:42:04.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.791 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:04.792 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.792 --rc genhtml_branch_coverage=1 00:42:04.792 --rc genhtml_function_coverage=1 00:42:04.792 --rc genhtml_legend=1 00:42:04.792 --rc geninfo_all_blocks=1 00:42:04.792 --rc geninfo_unexecuted_blocks=1 00:42:04.792 00:42:04.792 ' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.792 --rc genhtml_branch_coverage=1 00:42:04.792 --rc genhtml_function_coverage=1 00:42:04.792 --rc genhtml_legend=1 00:42:04.792 --rc geninfo_all_blocks=1 00:42:04.792 --rc geninfo_unexecuted_blocks=1 00:42:04.792 00:42:04.792 ' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.792 --rc genhtml_branch_coverage=1 00:42:04.792 --rc genhtml_function_coverage=1 00:42:04.792 --rc genhtml_legend=1 00:42:04.792 --rc geninfo_all_blocks=1 00:42:04.792 --rc geninfo_unexecuted_blocks=1 00:42:04.792 00:42:04.792 ' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.792 --rc genhtml_branch_coverage=1 00:42:04.792 --rc genhtml_function_coverage=1 00:42:04.792 --rc genhtml_legend=1 00:42:04.792 --rc geninfo_all_blocks=1 00:42:04.792 --rc geninfo_unexecuted_blocks=1 00:42:04.792 00:42:04.792 ' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.792 10:53:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:04.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:04.792 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:04.793 10:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:11.366 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:11.367 10:53:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:11.367 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:11.367 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.367 10:53:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:11.367 Found net devices under 0000:af:00.0: cvl_0_0 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.367 
10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:11.367 Found net devices under 0000:af:00.1: cvl_0_1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:11.367 10:53:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:11.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:11.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:42:11.367 00:42:11.367 --- 10.0.0.2 ping statistics --- 00:42:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.367 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:11.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:11.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:42:11.367 00:42:11.367 --- 10.0.0.1 ping statistics --- 00:42:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.367 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:11.367 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2287237 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2287237 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2287237 ']' 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.367 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.368 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:11.368 10:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:11.368 [2024-12-09 10:53:12.086902] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:11.368 [2024-12-09 10:53:12.086981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:11.368 [2024-12-09 10:53:12.222494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:11.368 [2024-12-09 10:53:12.278494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:11.368 [2024-12-09 10:53:12.278544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:42:11.368 [2024-12-09 10:53:12.278560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:11.368 [2024-12-09 10:53:12.278574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:11.368 [2024-12-09 10:53:12.278586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:11.368 [2024-12-09 10:53:12.280525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:11.368 [2024-12-09 10:53:12.280614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:11.368 [2024-12-09 10:53:12.280711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:11.368 [2024-12-09 10:53:12.280717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.936 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:11.936 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:42:11.936 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:11.936 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:11.936 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 10:53:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 [2024-12-09 10:53:13.199744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 Malloc0 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 
10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.197 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.198 [2024-12-09 10:53:13.251858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2287433 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2287435 
00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:12.198 { 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme$subsystem", 00:42:12.198 "trtype": "$TEST_TRANSPORT", 00:42:12.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "$NVMF_PORT", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:12.198 "hdgst": ${hdgst:-false}, 00:42:12.198 "ddgst": ${ddgst:-false} 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 } 00:42:12.198 EOF 00:42:12.198 )") 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2287437 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:12.198 { 00:42:12.198 "params": { 00:42:12.198 
"name": "Nvme$subsystem", 00:42:12.198 "trtype": "$TEST_TRANSPORT", 00:42:12.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "$NVMF_PORT", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:12.198 "hdgst": ${hdgst:-false}, 00:42:12.198 "ddgst": ${ddgst:-false} 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 } 00:42:12.198 EOF 00:42:12.198 )") 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2287440 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:12.198 { 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme$subsystem", 00:42:12.198 "trtype": "$TEST_TRANSPORT", 00:42:12.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "$NVMF_PORT", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:12.198 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:12.198 "hdgst": ${hdgst:-false}, 00:42:12.198 "ddgst": ${ddgst:-false} 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 } 00:42:12.198 EOF 00:42:12.198 )") 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:12.198 { 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme$subsystem", 00:42:12.198 "trtype": "$TEST_TRANSPORT", 00:42:12.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "$NVMF_PORT", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:12.198 "hdgst": ${hdgst:-false}, 00:42:12.198 "ddgst": ${ddgst:-false} 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 } 00:42:12.198 EOF 00:42:12.198 )") 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2287433 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme1", 00:42:12.198 "trtype": "tcp", 00:42:12.198 "traddr": "10.0.0.2", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "4420", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:12.198 "hdgst": false, 00:42:12.198 "ddgst": false 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 }' 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme1", 00:42:12.198 "trtype": "tcp", 00:42:12.198 "traddr": "10.0.0.2", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "4420", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:12.198 "hdgst": false, 00:42:12.198 "ddgst": false 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 }' 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme1", 00:42:12.198 "trtype": "tcp", 00:42:12.198 "traddr": "10.0.0.2", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "4420", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:12.198 "hdgst": false, 00:42:12.198 "ddgst": false 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 }' 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:12.198 10:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:12.198 "params": { 00:42:12.198 "name": "Nvme1", 00:42:12.198 "trtype": "tcp", 00:42:12.198 "traddr": "10.0.0.2", 00:42:12.198 "adrfam": "ipv4", 00:42:12.198 "trsvcid": "4420", 00:42:12.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:12.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:12.198 "hdgst": false, 00:42:12.198 "ddgst": false 00:42:12.198 }, 00:42:12.198 "method": "bdev_nvme_attach_controller" 00:42:12.198 }' 00:42:12.198 [2024-12-09 10:53:13.309750] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:42:12.198 [2024-12-09 10:53:13.309814] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:12.198 [2024-12-09 10:53:13.313935] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:12.198 [2024-12-09 10:53:13.313999] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:12.198 [2024-12-09 10:53:13.316667] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:12.198 [2024-12-09 10:53:13.316743] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:12.198 [2024-12-09 10:53:13.319555] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:42:12.199 [2024-12-09 10:53:13.319620] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:12.458 [2024-12-09 10:53:13.521802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.459 [2024-12-09 10:53:13.572184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:12.459 [2024-12-09 10:53:13.588786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.718 [2024-12-09 10:53:13.642108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:12.718 [2024-12-09 10:53:13.684949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.718 [2024-12-09 10:53:13.735366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:12.718 [2024-12-09 10:53:13.758303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.718 [2024-12-09 10:53:13.806974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:12.718 Running I/O for 1 seconds... 00:42:12.718 Running I/O for 1 seconds... 00:42:12.978 Running I/O for 1 seconds... 00:42:12.978 Running I/O for 1 seconds... 
00:42:13.917 11746.00 IOPS, 45.88 MiB/s 00:42:13.917 Latency(us) 00:42:13.917 [2024-12-09T09:53:15.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.917 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:13.917 Nvme1n1 : 1.01 11803.46 46.11 0.00 0.00 10807.17 5812.76 16526.47 00:42:13.917 [2024-12-09T09:53:15.093Z] =================================================================================================================== 00:42:13.917 [2024-12-09T09:53:15.093Z] Total : 11803.46 46.11 0.00 0.00 10807.17 5812.76 16526.47 00:42:13.917 9375.00 IOPS, 36.62 MiB/s 00:42:13.917 Latency(us) 00:42:13.917 [2024-12-09T09:53:15.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.917 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:42:13.917 Nvme1n1 : 1.01 9430.36 36.84 0.00 0.00 13516.95 6268.66 19945.74 00:42:13.917 [2024-12-09T09:53:15.093Z] =================================================================================================================== 00:42:13.917 [2024-12-09T09:53:15.093Z] Total : 9430.36 36.84 0.00 0.00 13516.95 6268.66 19945.74 00:42:13.917 162584.00 IOPS, 635.09 MiB/s 00:42:13.918 Latency(us) 00:42:13.918 [2024-12-09T09:53:15.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.918 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:13.918 Nvme1n1 : 1.00 162222.13 633.68 0.00 0.00 784.54 341.93 2222.53 00:42:13.918 [2024-12-09T09:53:15.094Z] =================================================================================================================== 00:42:13.918 [2024-12-09T09:53:15.094Z] Total : 162222.13 633.68 0.00 0.00 784.54 341.93 2222.53 00:42:13.918 8574.00 IOPS, 33.49 MiB/s 00:42:13.918 Latency(us) 00:42:13.918 [2024-12-09T09:53:15.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.918 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:13.918 Nvme1n1 : 1.01 8652.14 33.80 0.00 0.00 14740.75 3519.00 23706.94 00:42:13.918 [2024-12-09T09:53:15.094Z] =================================================================================================================== 00:42:13.918 [2024-12-09T09:53:15.094Z] Total : 8652.14 33.80 0.00 0.00 14740.75 3519.00 23706.94 00:42:13.918 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2287435 00:42:13.918 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2287437 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2287440 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:14.177 rmmod nvme_tcp 00:42:14.177 rmmod nvme_fabrics 00:42:14.177 rmmod nvme_keyring 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2287237 ']' 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2287237 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2287237 ']' 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2287237 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.177 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2287237 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2287237' 00:42:14.436 killing process with pid 2287237 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2287237 00:42:14.436 10:53:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2287237 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:14.436 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:14.696 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:14.696 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.696 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:14.696 10:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:16.603 00:42:16.603 real 0m12.096s 00:42:16.603 user 0m20.346s 00:42:16.603 sys 0m6.721s 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:16.603 ************************************ 
00:42:16.603 END TEST nvmf_bdev_io_wait 00:42:16.603 ************************************ 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:16.603 10:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:16.863 ************************************ 00:42:16.863 START TEST nvmf_queue_depth 00:42:16.863 ************************************ 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:42:16.863 * Looking for test storage... 00:42:16.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:16.863 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:16.864 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:16.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.864 --rc genhtml_branch_coverage=1 00:42:16.864 --rc genhtml_function_coverage=1 00:42:16.864 --rc genhtml_legend=1 00:42:16.864 --rc geninfo_all_blocks=1 00:42:16.864 --rc 
geninfo_unexecuted_blocks=1 00:42:16.864 00:42:16.864 ' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:16.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.864 --rc genhtml_branch_coverage=1 00:42:16.864 --rc genhtml_function_coverage=1 00:42:16.864 --rc genhtml_legend=1 00:42:16.864 --rc geninfo_all_blocks=1 00:42:16.864 --rc geninfo_unexecuted_blocks=1 00:42:16.864 00:42:16.864 ' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:16.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.864 --rc genhtml_branch_coverage=1 00:42:16.864 --rc genhtml_function_coverage=1 00:42:16.864 --rc genhtml_legend=1 00:42:16.864 --rc geninfo_all_blocks=1 00:42:16.864 --rc geninfo_unexecuted_blocks=1 00:42:16.864 00:42:16.864 ' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:16.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.864 --rc genhtml_branch_coverage=1 00:42:16.864 --rc genhtml_function_coverage=1 00:42:16.864 --rc genhtml_legend=1 00:42:16.864 --rc geninfo_all_blocks=1 00:42:16.864 --rc geninfo_unexecuted_blocks=1 00:42:16.864 00:42:16.864 ' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:16.864 10:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:16.864 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:17.124 10:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:17.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:17.124 10:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:17.124 10:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:23.700 10:53:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:23.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:23.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:23.700 Found net devices under 0000:af:00.0: cvl_0_0 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:23.700 Found net devices under 0000:af:00.1: cvl_0_1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:23.700 
10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:23.700 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:23.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:23.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:42:23.701 00:42:23.701 --- 10.0.0.2 ping statistics --- 00:42:23.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:23.701 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:23.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
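The nvmf_tcp_init sequence above builds a point-to-point TCP test topology: a network namespace is created, the target-side port (cvl_0_0) is moved into it, both ends get 10.0.0.x/24 addresses, an iptables rule opens port 4420, and pings verify reachability in both directions. A standalone sketch of those steps, using the interface and address values from this trace (dry-run by default, since actually executing it needs root and the real NICs; this is a recap, not part of the harness):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# Set run= (empty) instead of run=echo to actually execute; requires root.
run=echo
ns=cvl_0_0_ns_spdk             # target namespace name from the trace
tgt_if=cvl_0_0 ini_if=cvl_0_1  # target / initiator interfaces

$run ip -4 addr flush "$tgt_if"
$run ip -4 addr flush "$ini_if"
$run ip netns add "$ns"
$run ip link set "$tgt_if" netns "$ns"           # move target port into the namespace
$run ip addr add 10.0.0.1/24 dev "$ini_if"       # initiator IP
$run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
$run ip link set "$ini_if" up
$run ip netns exec "$ns" ip link set "$tgt_if" up
$run ip netns exec "$ns" ip link set lo up
$run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
$run ping -c 1 10.0.0.2                          # initiator -> target
$run ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator
```

With the namespace holding the target port, the nvmf_tgt process is later launched under `ip netns exec "$ns"` so that target and initiator traffic traverse the physical link rather than loopback.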
00:42:23.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:42:23.701 00:42:23.701 --- 10.0.0.1 ping statistics --- 00:42:23.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:23.701 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:23.701 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2290894 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2290894 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2290894 ']' 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.961 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:23.961 [2024-12-09 10:53:24.967175] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:23.961 [2024-12-09 10:53:24.967251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.961 [2024-12-09 10:53:25.076537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.961 [2024-12-09 10:53:25.116723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.961 [2024-12-09 10:53:25.116766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:42:23.961 [2024-12-09 10:53:25.116777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.961 [2024-12-09 10:53:25.116787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.961 [2024-12-09 10:53:25.116795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:23.961 [2024-12-09 10:53:25.117276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:24.220 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 [2024-12-09 10:53:25.258774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 Malloc0 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 [2024-12-09 10:53:25.302512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:24.221 10:53:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2291072 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2291072 /var/tmp/bdevperf.sock 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2291072 ']' 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.221 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:24.221 [2024-12-09 10:53:25.363681] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:42:24.221 [2024-12-09 10:53:25.363749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291072 ] 00:42:24.481 [2024-12-09 10:53:25.491570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.481 [2024-12-09 10:53:25.544362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.481 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.481 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:24.481 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:24.481 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.481 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:24.741 NVMe0n1 00:42:24.741 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.741 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:24.741 Running I/O for 10 seconds... 
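The bdevperf job just launched uses a fixed 4096-byte IO size (`-o 4096`), so the MiB/s figures it reports are fully determined by the IOPS figures: MiB/s = IOPS × 4096 / 2^20. A quick sanity check against this run's final summary (the IOPS value is copied from the JSON results that follow):

```shell
#!/usr/bin/env bash
# Recompute bdevperf's reported throughput from its reported IOPS.
# iops is taken from this run's JSON summary ("iops": 10307.084141468698).
iops=10307.084141468698
io_size=4096                       # bytes per IO, from -o 4096
mibps=$(awk -v iops="$iops" -v sz="$io_size" \
  'BEGIN { printf "%.2f", iops * sz / 1048576 }')
echo "$mibps MiB/s"                # matches the summary's 40.26 MiB/s
```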
00:42:27.072 9971.00 IOPS, 38.95 MiB/s [2024-12-09T09:53:29.189Z] 10162.50 IOPS, 39.70 MiB/s [2024-12-09T09:53:30.130Z] 10170.00 IOPS, 39.73 MiB/s [2024-12-09T09:53:31.071Z] 10200.00 IOPS, 39.84 MiB/s [2024-12-09T09:53:32.010Z] 10196.20 IOPS, 39.83 MiB/s [2024-12-09T09:53:32.950Z] 10235.00 IOPS, 39.98 MiB/s [2024-12-09T09:53:34.332Z] 10240.43 IOPS, 40.00 MiB/s [2024-12-09T09:53:35.287Z] 10241.38 IOPS, 40.01 MiB/s [2024-12-09T09:53:36.231Z] 10248.00 IOPS, 40.03 MiB/s [2024-12-09T09:53:36.231Z] 10274.50 IOPS, 40.13 MiB/s 00:42:35.055 Latency(us) 00:42:35.055 [2024-12-09T09:53:36.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:35.055 Verification LBA range: start 0x0 length 0x4000 00:42:35.055 NVMe0n1 : 10.05 10307.08 40.26 0.00 0.00 98960.59 5185.89 65649.98 00:42:35.055 [2024-12-09T09:53:36.231Z] =================================================================================================================== 00:42:35.055 [2024-12-09T09:53:36.231Z] Total : 10307.08 40.26 0.00 0.00 98960.59 5185.89 65649.98 00:42:35.055 { 00:42:35.055 "results": [ 00:42:35.055 { 00:42:35.055 "job": "NVMe0n1", 00:42:35.055 "core_mask": "0x1", 00:42:35.055 "workload": "verify", 00:42:35.055 "status": "finished", 00:42:35.055 "verify_range": { 00:42:35.055 "start": 0, 00:42:35.055 "length": 16384 00:42:35.055 }, 00:42:35.055 "queue_depth": 1024, 00:42:35.055 "io_size": 4096, 00:42:35.055 "runtime": 10.049884, 00:42:35.055 "iops": 10307.084141468698, 00:42:35.055 "mibps": 40.2620474276121, 00:42:35.055 "io_failed": 0, 00:42:35.055 "io_timeout": 0, 00:42:35.055 "avg_latency_us": 98960.58780886103, 00:42:35.055 "min_latency_us": 5185.892173913044, 00:42:35.055 "max_latency_us": 65649.97565217392 00:42:35.055 } 00:42:35.055 ], 00:42:35.055 "core_count": 1 00:42:35.055 } 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2291072 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2291072 ']' 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2291072 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.055 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291072 00:42:35.055 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:35.055 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:35.055 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291072' 00:42:35.055 killing process with pid 2291072 00:42:35.055 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2291072 00:42:35.055 Received shutdown signal, test time was about 10.000000 seconds 00:42:35.055 00:42:35.055 Latency(us) 00:42:35.055 [2024-12-09T09:53:36.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.055 [2024-12-09T09:53:36.231Z] =================================================================================================================== 00:42:35.055 [2024-12-09T09:53:36.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:35.055 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2291072 00:42:35.315 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:35.315 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:42:35.315 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:35.315 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:35.316 rmmod nvme_tcp 00:42:35.316 rmmod nvme_fabrics 00:42:35.316 rmmod nvme_keyring 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2290894 ']' 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2290894 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2290894 ']' 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2290894 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290894 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290894' 00:42:35.316 killing process with pid 2290894 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2290894 00:42:35.316 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2290894 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:35.576 10:53:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.135 10:53:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:38.135 00:42:38.135 real 0m20.991s 00:42:38.135 user 0m24.108s 00:42:38.135 sys 0m6.817s 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:38.135 ************************************ 00:42:38.135 END TEST nvmf_queue_depth 00:42:38.135 ************************************ 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:38.135 ************************************ 00:42:38.135 START TEST nvmf_target_multipath 00:42:38.135 ************************************ 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:42:38.135 * Looking for test storage... 
00:42:38.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:42:38.135 10:53:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:38.135 10:53:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:38.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.135 --rc genhtml_branch_coverage=1 00:42:38.135 --rc genhtml_function_coverage=1 00:42:38.135 --rc genhtml_legend=1 00:42:38.135 --rc geninfo_all_blocks=1 00:42:38.135 --rc geninfo_unexecuted_blocks=1 00:42:38.135 00:42:38.135 ' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:38.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.135 --rc genhtml_branch_coverage=1 00:42:38.135 --rc genhtml_function_coverage=1 00:42:38.135 --rc genhtml_legend=1 00:42:38.135 --rc geninfo_all_blocks=1 00:42:38.135 --rc geninfo_unexecuted_blocks=1 00:42:38.135 00:42:38.135 ' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:38.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.135 --rc genhtml_branch_coverage=1 00:42:38.135 --rc genhtml_function_coverage=1 00:42:38.135 --rc genhtml_legend=1 00:42:38.135 --rc geninfo_all_blocks=1 00:42:38.135 --rc geninfo_unexecuted_blocks=1 00:42:38.135 00:42:38.135 ' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:38.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.135 --rc genhtml_branch_coverage=1 00:42:38.135 --rc genhtml_function_coverage=1 00:42:38.135 --rc genhtml_legend=1 00:42:38.135 --rc geninfo_all_blocks=1 00:42:38.135 --rc geninfo_unexecuted_blocks=1 00:42:38.135 00:42:38.135 ' 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:38.135 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:38.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:38.136 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:44.720 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:44.721 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:44.721 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:44.721 Found net devices under 0000:af:00.0: cvl_0_0 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:44.721 10:53:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:44.721 Found net devices under 0000:af:00.1: cvl_0_1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:44.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:44.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:42:44.721 00:42:44.721 --- 10.0.0.2 ping statistics --- 00:42:44.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.721 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:44.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:44.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:42:44.721 00:42:44.721 --- 10.0.0.1 ping statistics --- 00:42:44.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.721 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:44.721 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:44.722 only one NIC for nvmf test 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:44.722 10:53:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:44.722 rmmod nvme_tcp 00:42:44.722 rmmod nvme_fabrics 00:42:44.722 rmmod nvme_keyring 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.722 10:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:46.733 00:42:46.733 real 0m8.962s 00:42:46.733 user 0m2.143s 00:42:46.733 sys 0m4.875s 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:46.733 ************************************ 00:42:46.733 END TEST nvmf_target_multipath 00:42:46.733 ************************************ 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:46.733 ************************************ 00:42:46.733 START TEST nvmf_zcopy 00:42:46.733 ************************************ 00:42:46.733 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:42:46.994 * Looking for test storage... 00:42:46.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.994 10:53:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.994 --rc genhtml_branch_coverage=1 00:42:46.994 --rc genhtml_function_coverage=1 00:42:46.994 --rc genhtml_legend=1 00:42:46.994 --rc geninfo_all_blocks=1 00:42:46.994 --rc geninfo_unexecuted_blocks=1 00:42:46.994 00:42:46.994 ' 00:42:46.994 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.995 --rc genhtml_branch_coverage=1 00:42:46.995 --rc genhtml_function_coverage=1 00:42:46.995 --rc genhtml_legend=1 00:42:46.995 --rc geninfo_all_blocks=1 00:42:46.995 --rc geninfo_unexecuted_blocks=1 00:42:46.995 00:42:46.995 ' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:46.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.995 --rc genhtml_branch_coverage=1 00:42:46.995 --rc genhtml_function_coverage=1 00:42:46.995 --rc genhtml_legend=1 00:42:46.995 --rc geninfo_all_blocks=1 00:42:46.995 --rc geninfo_unexecuted_blocks=1 00:42:46.995 00:42:46.995 ' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:46.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.995 --rc genhtml_branch_coverage=1 00:42:46.995 --rc 
genhtml_function_coverage=1 00:42:46.995 --rc genhtml_legend=1 00:42:46.995 --rc geninfo_all_blocks=1 00:42:46.995 --rc geninfo_unexecuted_blocks=1 00:42:46.995 00:42:46.995 ' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.995 10:53:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:46.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:46.995 10:53:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:46.995 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:55.131 10:53:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:55.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:55.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:55.131 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:55.132 Found net devices under 0000:af:00.0: cvl_0_0 00:42:55.132 10:53:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:55.132 Found net devices under 0000:af:00.1: cvl_0_1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:55.132 10:53:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:55.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:55.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:42:55.132 00:42:55.132 --- 10.0.0.2 ping statistics --- 00:42:55.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:55.132 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:55.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:55.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:42:55.132 00:42:55.132 --- 10.0.0.1 ping statistics --- 00:42:55.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:55.132 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2298967 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2298967 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2298967 ']' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:55.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.132 [2024-12-09 10:53:55.379485] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:55.132 [2024-12-09 10:53:55.379569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:55.132 [2024-12-09 10:53:55.480212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.132 [2024-12-09 10:53:55.520914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:55.132 [2024-12-09 10:53:55.520953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:42:55.132 [2024-12-09 10:53:55.520963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:55.132 [2024-12-09 10:53:55.520973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:55.132 [2024-12-09 10:53:55.520980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:55.132 [2024-12-09 10:53:55.521445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.132 [2024-12-09 10:53:55.670251] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:55.132 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.133 [2024-12-09 10:53:55.694475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.133 malloc0 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.133 { 00:42:55.133 "params": { 00:42:55.133 "name": "Nvme$subsystem", 00:42:55.133 "trtype": "$TEST_TRANSPORT", 00:42:55.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.133 "adrfam": "ipv4", 00:42:55.133 "trsvcid": "$NVMF_PORT", 00:42:55.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.133 "hdgst": ${hdgst:-false}, 00:42:55.133 "ddgst": ${ddgst:-false} 00:42:55.133 }, 00:42:55.133 "method": "bdev_nvme_attach_controller" 00:42:55.133 } 00:42:55.133 EOF 00:42:55.133 )") 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:55.133 10:53:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:55.133 "params": { 00:42:55.133 "name": "Nvme1", 00:42:55.133 "trtype": "tcp", 00:42:55.133 "traddr": "10.0.0.2", 00:42:55.133 "adrfam": "ipv4", 00:42:55.133 "trsvcid": "4420", 00:42:55.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:55.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:55.133 "hdgst": false, 00:42:55.133 "ddgst": false 00:42:55.133 }, 00:42:55.133 "method": "bdev_nvme_attach_controller" 00:42:55.133 }' 00:42:55.133 [2024-12-09 10:53:55.800317] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:42:55.133 [2024-12-09 10:53:55.800386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298989 ] 00:42:55.133 [2024-12-09 10:53:55.927777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.133 [2024-12-09 10:53:55.981296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.133 Running I/O for 10 seconds... 
00:42:57.453 8475.00 IOPS, 66.21 MiB/s [2024-12-09T09:53:59.570Z] 8536.50 IOPS, 66.69 MiB/s [2024-12-09T09:54:00.511Z] 8557.67 IOPS, 66.86 MiB/s [2024-12-09T09:54:01.454Z] 8548.75 IOPS, 66.79 MiB/s [2024-12-09T09:54:02.394Z] 8536.40 IOPS, 66.69 MiB/s [2024-12-09T09:54:03.334Z] 8542.33 IOPS, 66.74 MiB/s [2024-12-09T09:54:04.273Z] 8558.29 IOPS, 66.86 MiB/s [2024-12-09T09:54:05.656Z] 8566.38 IOPS, 66.92 MiB/s [2024-12-09T09:54:06.597Z] 8570.78 IOPS, 66.96 MiB/s [2024-12-09T09:54:06.597Z] 8578.00 IOPS, 67.02 MiB/s 00:43:05.421 Latency(us) 00:43:05.421 [2024-12-09T09:54:06.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:05.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:43:05.421 Verification LBA range: start 0x0 length 0x1000 00:43:05.421 Nvme1n1 : 10.01 8579.58 67.03 0.00 0.00 14859.44 1025.78 25302.59 00:43:05.421 [2024-12-09T09:54:06.597Z] =================================================================================================================== 00:43:05.421 [2024-12-09T09:54:06.597Z] Total : 8579.58 67.03 0.00 0.00 14859.44 1025.78 25302.59 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2300529 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:05.421 10:54:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:05.421 { 00:43:05.421 "params": { 00:43:05.421 "name": "Nvme$subsystem", 00:43:05.421 "trtype": "$TEST_TRANSPORT", 00:43:05.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.421 "adrfam": "ipv4", 00:43:05.421 "trsvcid": "$NVMF_PORT", 00:43:05.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.421 "hdgst": ${hdgst:-false}, 00:43:05.421 "ddgst": ${ddgst:-false} 00:43:05.421 }, 00:43:05.421 "method": "bdev_nvme_attach_controller" 00:43:05.421 } 00:43:05.421 EOF 00:43:05.421 )") 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:05.421 [2024-12-09 10:54:06.502716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.502761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:05.421 10:54:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:05.421 "params": { 00:43:05.421 "name": "Nvme1", 00:43:05.421 "trtype": "tcp", 00:43:05.421 "traddr": "10.0.0.2", 00:43:05.421 "adrfam": "ipv4", 00:43:05.421 "trsvcid": "4420", 00:43:05.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:05.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:05.421 "hdgst": false, 00:43:05.421 "ddgst": false 00:43:05.421 }, 00:43:05.421 "method": "bdev_nvme_attach_controller" 00:43:05.421 }' 00:43:05.421 [2024-12-09 10:54:06.514704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.514723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.526719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.526733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.538750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.538763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.550369] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
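The trace above shows `nvmf/common.sh` building the bdevperf JSON config: for each subsystem it expands a heredoc template into a `config` array entry, joins the entries with `IFS=,`, and prints the result (which is then filtered through `jq`). The following is a hypothetical, simplified reconstruction of that pattern based only on the traced lines — the function name, fixed address values, and the omission of the `jq` post-processing step are assumptions, not SPDK's actual script:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern seen in the xtrace above:
# one JSON object per subsystem from a heredoc, joined with commas.
# Address/port values are placeholders standing in for the test-bed
# environment variables ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, ...).
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem fragments with commas, as IFS=, does above.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

In the real test, the generated config is fed to bdevperf over a file descriptor (`--json /dev/fd/63`, i.e. process substitution), so the attach parameters never touch disk.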
00:43:05.421 [2024-12-09 10:54:06.550437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300529 ] 00:43:05.421 [2024-12-09 10:54:06.550779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.550792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.566821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.566834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.578852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.578865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.421 [2024-12-09 10:54:06.590884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.421 [2024-12-09 10:54:06.590897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.602915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.602929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.614947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.614963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.626975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.626990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:43:05.682 [2024-12-09 10:54:06.639006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.639019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.651037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.651050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.663069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.663082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.675101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.675114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.677614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.682 [2024-12-09 10:54:06.687137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.687158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.699163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.699184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.682 [2024-12-09 10:54:06.711193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.682 [2024-12-09 10:54:06.711206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.723224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.723237] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.731496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.683 [2024-12-09 10:54:06.735256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.735270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.747298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.747320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.759322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.759339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.771356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.771374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.783386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.783401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.795419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.795435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.807449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.807464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.819477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:43:05.683 [2024-12-09 10:54:06.819490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.831527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.831553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.843551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.843567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.683 [2024-12-09 10:54:06.855581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.683 [2024-12-09 10:54:06.855599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.867617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.867631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.879654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.879666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.891690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.891704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.903719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.903736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.919765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 
10:54:06.919782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.966940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.966961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:06.975913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.975928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 Running I/O for 5 seconds... 00:43:05.944 [2024-12-09 10:54:06.992220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:06.992243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.006541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.006564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.021197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.021218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.036588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.036610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.050505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.050531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.064605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 
10:54:07.064627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.078564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.078585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.092516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.092538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.944 [2024-12-09 10:54:07.106451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.944 [2024-12-09 10:54:07.106472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.120938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.120961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.136689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.136711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.151071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.151093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.164785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.164806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.179093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.179115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.193103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.193125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.207301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.207322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.223600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.223627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.234825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.234848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.248830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.248851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.261987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.262009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.276377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.276399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.287331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.287355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 
[2024-12-09 10:54:07.301673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.301696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.315797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.315819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.329887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.329910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.343814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.343837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.357719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.357743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.205 [2024-12-09 10:54:07.371997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.205 [2024-12-09 10:54:07.372021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.383186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.383209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.397764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.397787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.411537] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.411561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.425690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.425717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.440145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.440168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.450737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.450760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.465275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.465296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.479170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.479196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.493500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.493522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.500991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.501012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.514308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.514330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.528829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.528851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.544305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.544329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.558968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.558990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.574166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.574188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.587953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.587976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.601785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.601807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.615936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 [2024-12-09 10:54:07.615960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.466 [2024-12-09 10:54:07.626809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.466 
[2024-12-09 10:54:07.626831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.641504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.726 [2024-12-09 10:54:07.641528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.655176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.726 [2024-12-09 10:54:07.655198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.668978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.726 [2024-12-09 10:54:07.669001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.683045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.726 [2024-12-09 10:54:07.683067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.696905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.726 [2024-12-09 10:54:07.696927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.726 [2024-12-09 10:54:07.710809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.710832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.725152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.725174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.736364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.736390] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.751042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.751064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.764833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.764854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.779041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.779063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.792637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.792670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.806336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.806357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.820444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.820466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.834630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.834656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.845531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.845553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:06.727 [2024-12-09 10:54:07.860131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.860152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.870340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.870361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.884624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.884650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.727 [2024-12-09 10:54:07.898638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.727 [2024-12-09 10:54:07.898664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.987 [2024-12-09 10:54:07.913091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.987 [2024-12-09 10:54:07.913112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.987 [2024-12-09 10:54:07.928272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.987 [2024-12-09 10:54:07.928293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.987 [2024-12-09 10:54:07.942639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.987 [2024-12-09 10:54:07.942670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.987 [2024-12-09 10:54:07.953479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:07.953502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:07.967962] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:07.967983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:07.982028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:07.982049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 16538.00 IOPS, 129.20 MiB/s [2024-12-09T09:54:08.164Z] [2024-12-09 10:54:07.993050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:07.993075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.008009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.008030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.024056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.024077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.038712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.038734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.054129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.054151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.068516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.068539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.082152] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.082178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.096781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.096802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.111891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.111912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.125631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.125660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.139295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.139317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:06.988 [2024-12-09 10:54:08.153423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:06.988 [2024-12-09 10:54:08.153446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.167136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.167158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.181454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.181475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.195055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.195077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.209211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.209232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.222721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.222742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.237066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.237088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.251084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.251105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.265192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.265212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.276591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.276611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.290921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.290942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.304470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 
[2024-12-09 10:54:08.304492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.318587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.318608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.329531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.329551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.343352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.343372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.357079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.357100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.370966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.370987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.385138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.385159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.398888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.398909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.412939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.412961] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.249 [2024-12-09 10:54:08.424035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.249 [2024-12-09 10:54:08.424057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.438821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.438842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.452225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.452246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.466417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.466438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.479932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.479952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.493900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.493921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.508079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.508101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.518991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.519012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:07.510 [2024-12-09 10:54:08.533835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.533857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.548179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.548201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.563680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.563702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.577701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.577722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.591430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.591451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.605252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.605272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.618916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.618937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.632826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.632847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.647139] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.647160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.510 [2024-12-09 10:54:08.661010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.510 [2024-12-09 10:54:08.661031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.511 [2024-12-09 10:54:08.675118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.511 [2024-12-09 10:54:08.675140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.689260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.689283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.704106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.704128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.719875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.719898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.733484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.733507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.747659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.747682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.761800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.761823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.775326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.775347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.789245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.789268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.802745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.802767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.817002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.817025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.830456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.830478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.844840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.844862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.860540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.860561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.874716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 
[2024-12-09 10:54:08.874741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.886050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.886071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.900797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.900819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.911836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.911858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.926404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.926427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:07.772 [2024-12-09 10:54:08.940039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:07.772 [2024-12-09 10:54:08.940060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:08.954233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:08.954255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:08.965055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:08.965077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:08.979430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:08.979452] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 16625.50 IOPS, 129.89 MiB/s [2024-12-09T09:54:09.209Z] [2024-12-09 10:54:08.994080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:08.994101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.009848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.009870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.023771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.023793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.038207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.038232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.053146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.053169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.067654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.067676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.081669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.081691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.096008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.096031] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.107468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.107489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.122198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.122218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.133350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.133371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.147893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.147914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.161863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.161885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.033 [2024-12-09 10:54:09.176469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.033 [2024-12-09 10:54:09.176491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.034 [2024-12-09 10:54:09.192172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.034 [2024-12-09 10:54:09.192195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.034 [2024-12-09 10:54:09.206595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.034 [2024-12-09 10:54:09.206617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:08.295 [2024-12-09 10:54:09.220350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.220372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.235148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.235169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.250800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.250821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.265237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.265259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.275851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.275873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.290384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.290406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.304277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.304303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.315973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.315994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.330493] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.330514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.344293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.344314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.358588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.358609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.370019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.370040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.384540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.384561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.398006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.398028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.412293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.412314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.423239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.423259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.438219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.438240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.454370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.454391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.295 [2024-12-09 10:54:09.469475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.295 [2024-12-09 10:54:09.469496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.555 [2024-12-09 10:54:09.484681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.555 [2024-12-09 10:54:09.484702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.555 [2024-12-09 10:54:09.499464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.555 [2024-12-09 10:54:09.499485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.555 [2024-12-09 10:54:09.514974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.555 [2024-12-09 10:54:09.514996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.555 [2024-12-09 10:54:09.530228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.555 [2024-12-09 10:54:09.530251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.544772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.544794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.559726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 
[2024-12-09 10:54:09.559747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.575007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.575033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.589553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.589574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.603803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.603823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.618004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.618025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.628804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.628824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.643277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.643297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.657505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.657526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.671943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.671964] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.687935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.687956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.701993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.702015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.716100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.716121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.556 [2024-12-09 10:54:09.727112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.556 [2024-12-09 10:54:09.727133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.741609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.741629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.755382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.755403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.769476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.769497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.783519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.783541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:08.816 [2024-12-09 10:54:09.797436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.797457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.811126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.811148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.825067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.825088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.839092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.839112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.852943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.852964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.866969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.866990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.880464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.880485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.894309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.894330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 [2024-12-09 10:54:09.908106] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:08.816 [2024-12-09 10:54:09.908128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:08.816 (the two messages above repeated for each add-namespace attempt from 10:54:09.922013 through 10:54:09.977905; repeats omitted) 00:43:09.078 16600.67 IOPS, 129.69 MiB/s [2024-12-09T09:54:10.254Z] (error pair repeated for each attempt from 10:54:09.991997 through 10:54:10.914742; repeats omitted) 00:43:09.861 [2024-12-09 10:54:10.929310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.861 [2024-12-09 10:54:10.929330]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.861 (error pair repeated for each attempt from 10:54:10.944933 through 10:54:10.989902; repeats omitted) 00:43:09.861 16542.00 IOPS, 129.23 MiB/s [2024-12-09T09:54:11.037Z] (error pair repeated for each attempt from 10:54:11.001214 through 10:54:11.949717; repeats omitted) 00:43:10.907 [2024-12-09 10:54:11.960762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:43:10.907 [2024-12-09 10:54:11.960782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.907 (error pair repeated at 10:54:11.975707 and 10:54:11.991719; repeats omitted)
00:43:10.907 16580.00 IOPS, 129.53 MiB/s
00:43:10.907 Latency(us)
00:43:10.907 [2024-12-09T09:54:12.083Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:43:10.907 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:43:10.907 Nvme1n1 :             5.01        16586.53  129.58  0.00    0.00  7710.26  2977.61  15728.64
00:43:10.907 [2024-12-09T09:54:12.083Z] ===================================================================================================================
00:43:10.907 [2024-12-09T09:54:12.083Z] Total :   16586.53  129.58  0.00    0.00  7710.26  2977.61  15728.64
00:43:10.907 [2024-12-09 10:54:12.001904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.907 [2024-12-09 10:54:12.001924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.907 (the same error pair repeated at roughly 12 ms intervals from 10:54:12.013932 through 10:54:12.218471; repeats omitted) [2024-12-09 10:54:12.230490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use
00:43:11.168 [2024-12-09 10:54:12.230503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:11.168 [2024-12-09 10:54:12.242524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:11.168 [2024-12-09 10:54:12.242538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:11.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2300529) - No such process
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2300529
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:11.168 delay0
00:43:11.168 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:11.169 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:43:11.169 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:11.169 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:11.169 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:11.169 10:54:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:43:11.428 [2024-12-09 10:54:12.453883] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:43:18.004 Initializing NVMe Controllers
00:43:18.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:43:18.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:43:18.005 Initialization complete. Launching workers.
00:43:18.005 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 7469
00:43:18.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7682, failed to submit 82
00:43:18.005 success 7527, unsuccessful 155, failed 0
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:18.005 rmmod nvme_tcp
00:43:18.005 rmmod nvme_fabrics
00:43:18.005 rmmod nvme_keyring
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2298967 ']'
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2298967
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2298967 ']'
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2298967
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2298967
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2298967'
00:43:18.005 killing process with pid 2298967
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2298967
00:43:18.005 10:54:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2298967
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:43:18.005 10:54:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:43:20.544
00:43:20.544 real 0m33.258s
00:43:20.544 user 0m44.684s
00:43:20.544 sys 0m12.037s
00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:20.544 ************************************
00:43:20.544 END TEST nvmf_zcopy
00:43:20.544 ************************************
00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:43:20.544 10:54:21
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:20.544 ************************************ 00:43:20.544 START TEST nvmf_nmic 00:43:20.544 ************************************ 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:43:20.544 * Looking for test storage... 00:43:20.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # 
ver1_l=2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:43:20.544 10:54:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.544 --rc genhtml_branch_coverage=1 00:43:20.544 --rc genhtml_function_coverage=1 00:43:20.544 --rc genhtml_legend=1 00:43:20.544 --rc geninfo_all_blocks=1 00:43:20.544 --rc geninfo_unexecuted_blocks=1 00:43:20.544 00:43:20.544 ' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.544 --rc genhtml_branch_coverage=1 00:43:20.544 --rc genhtml_function_coverage=1 00:43:20.544 --rc genhtml_legend=1 00:43:20.544 --rc geninfo_all_blocks=1 00:43:20.544 --rc geninfo_unexecuted_blocks=1 00:43:20.544 00:43:20.544 ' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.544 --rc genhtml_branch_coverage=1 00:43:20.544 --rc genhtml_function_coverage=1 00:43:20.544 --rc genhtml_legend=1 00:43:20.544 --rc geninfo_all_blocks=1 00:43:20.544 --rc geninfo_unexecuted_blocks=1 00:43:20.544 00:43:20.544 ' 00:43:20.544 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.544 --rc genhtml_branch_coverage=1 00:43:20.545 --rc genhtml_function_coverage=1 00:43:20.545 --rc genhtml_legend=1 00:43:20.545 --rc geninfo_all_blocks=1 00:43:20.545 --rc geninfo_unexecuted_blocks=1 00:43:20.545 00:43:20.545 ' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:20.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:20.545 
10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:20.545 10:54:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:27.124 10:54:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:27.124 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:27.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:27.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:27.125 Found net devices under 0000:af:00.0: cvl_0_0 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:27.125 Found net devices under 0000:af:00.1: cvl_0_1 00:43:27.125 
10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:27.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:27.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:43:27.125 00:43:27.125 --- 10.0.0.2 ping statistics --- 00:43:27.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.125 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:27.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:27.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:43:27.125 00:43:27.125 --- 10.0.0.1 ping statistics --- 00:43:27.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.125 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2305730 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2305730 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2305730 ']' 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:27.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:27.125 10:54:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.125 [2024-12-09 10:54:28.033961] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:43:27.125 [2024-12-09 10:54:28.034039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:27.125 [2024-12-09 10:54:28.167704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:27.125 [2024-12-09 10:54:28.221038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:27.125 [2024-12-09 10:54:28.221091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:43:27.125 [2024-12-09 10:54:28.221107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:27.125 [2024-12-09 10:54:28.221121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:27.125 [2024-12-09 10:54:28.221133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:27.125 [2024-12-09 10:54:28.222881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:27.125 [2024-12-09 10:54:28.222986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:27.125 [2024-12-09 10:54:28.223067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:27.125 [2024-12-09 10:54:28.223072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 [2024-12-09 10:54:28.382883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:27.386 
10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 Malloc0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 [2024-12-09 10:54:28.455682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:27.386 test case1: single bdev can't be used in multiple subsystems 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 [2024-12-09 10:54:28.479508] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:27.386 [2024-12-09 
10:54:28.479539] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:27.386 [2024-12-09 10:54:28.479555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.386 request: 00:43:27.386 { 00:43:27.386 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:27.386 "namespace": { 00:43:27.386 "bdev_name": "Malloc0", 00:43:27.386 "no_auto_visible": false, 00:43:27.386 "hide_metadata": false 00:43:27.386 }, 00:43:27.386 "method": "nvmf_subsystem_add_ns", 00:43:27.386 "req_id": 1 00:43:27.386 } 00:43:27.386 Got JSON-RPC error response 00:43:27.386 response: 00:43:27.386 { 00:43:27.386 "code": -32602, 00:43:27.386 "message": "Invalid parameters" 00:43:27.386 } 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:27.386 Adding namespace failed - expected result. 
00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:27.386 test case2: host connect to nvmf target in multiple paths 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.386 [2024-12-09 10:54:28.491703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.386 10:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:28.326 10:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:29.266 10:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:29.266 10:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:43:29.266 10:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:29.266 10:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:29.266 10:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:43:31.175 10:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:31.175 [global] 00:43:31.175 thread=1 00:43:31.175 invalidate=1 00:43:31.175 rw=write 00:43:31.175 time_based=1 00:43:31.175 runtime=1 00:43:31.175 ioengine=libaio 00:43:31.175 direct=1 00:43:31.175 bs=4096 00:43:31.175 iodepth=1 00:43:31.175 norandommap=0 00:43:31.175 numjobs=1 00:43:31.175 00:43:31.175 verify_dump=1 00:43:31.175 verify_backlog=512 00:43:31.175 verify_state_save=0 00:43:31.175 do_verify=1 00:43:31.175 verify=crc32c-intel 00:43:31.175 [job0] 00:43:31.175 filename=/dev/nvme0n1 00:43:31.175 Could not set queue depth (nvme0n1) 00:43:31.434 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:31.434 fio-3.35 00:43:31.434 Starting 1 thread 00:43:32.816 00:43:32.816 job0: (groupid=0, jobs=1): err= 0: pid=2306457: Mon Dec 9 10:54:33 2024 00:43:32.816 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:43:32.816 slat (nsec): min=12008, max=29115, avg=26089.26, stdev=4166.01 00:43:32.816 clat (usec): min=40806, max=41906, avg=41048.77, stdev=277.61 00:43:32.816 lat (usec): min=40834, max=41932, 
avg=41074.86, stdev=277.73 00:43:32.816 clat percentiles (usec): 00:43:32.816 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:32.816 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:32.816 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:43:32.816 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:32.816 | 99.99th=[41681] 00:43:32.816 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:43:32.816 slat (nsec): min=12163, max=46288, avg=13368.71, stdev=1787.13 00:43:32.816 clat (usec): min=137, max=305, avg=171.82, stdev=12.37 00:43:32.816 lat (usec): min=150, max=351, avg=185.19, stdev=13.12 00:43:32.816 clat percentiles (usec): 00:43:32.816 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 163], 00:43:32.816 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:43:32.816 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 188], 00:43:32.816 | 99.00th=[ 200], 99.50th=[ 202], 99.90th=[ 306], 99.95th=[ 306], 00:43:32.816 | 99.99th=[ 306] 00:43:32.816 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:43:32.816 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:32.816 lat (usec) : 250=95.51%, 500=0.19% 00:43:32.816 lat (msec) : 50=4.30% 00:43:32.816 cpu : usr=0.19%, sys=0.87%, ctx=535, majf=0, minf=1 00:43:32.816 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.816 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.816 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:32.816 00:43:32.816 Run status group 0 (all jobs): 00:43:32.816 READ: bw=88.4KiB/s (90.5kB/s), 88.4KiB/s-88.4KiB/s (90.5kB/s-90.5kB/s), io=92.0KiB (94.2kB), 
run=1041-1041msec 00:43:32.816 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:43:32.816 00:43:32.816 Disk stats (read/write): 00:43:32.816 nvme0n1: ios=69/512, merge=0/0, ticks=805/87, in_queue=892, util=91.58% 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:32.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:43:32.816 10:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:32.816 rmmod nvme_tcp 00:43:32.816 rmmod nvme_fabrics 00:43:33.077 rmmod nvme_keyring 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2305730 ']' 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2305730 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2305730 ']' 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2305730 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2305730 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2305730' 00:43:33.077 killing process with pid 2305730 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2305730 00:43:33.077 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2305730 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:33.338 10:54:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:35.881 00:43:35.881 real 0m15.246s 00:43:35.881 user 0m28.898s 00:43:35.881 sys 0m5.877s 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:35.881 ************************************ 00:43:35.881 END TEST nvmf_nmic 00:43:35.881 ************************************ 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:35.881 ************************************ 00:43:35.881 START TEST nvmf_fio_target 00:43:35.881 ************************************ 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:43:35.881 * Looking for test storage... 00:43:35.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:35.881 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:35.882 10:54:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:35.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.882 --rc genhtml_branch_coverage=1 00:43:35.882 --rc genhtml_function_coverage=1 00:43:35.882 --rc genhtml_legend=1 00:43:35.882 --rc geninfo_all_blocks=1 00:43:35.882 --rc geninfo_unexecuted_blocks=1 00:43:35.882 00:43:35.882 ' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:35.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.882 --rc genhtml_branch_coverage=1 00:43:35.882 --rc genhtml_function_coverage=1 00:43:35.882 --rc genhtml_legend=1 00:43:35.882 --rc geninfo_all_blocks=1 00:43:35.882 --rc geninfo_unexecuted_blocks=1 00:43:35.882 00:43:35.882 ' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:35.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.882 --rc genhtml_branch_coverage=1 00:43:35.882 --rc genhtml_function_coverage=1 00:43:35.882 --rc genhtml_legend=1 00:43:35.882 --rc geninfo_all_blocks=1 00:43:35.882 --rc geninfo_unexecuted_blocks=1 00:43:35.882 00:43:35.882 ' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:35.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.882 --rc 
genhtml_branch_coverage=1 00:43:35.882 --rc genhtml_function_coverage=1 00:43:35.882 --rc genhtml_legend=1 00:43:35.882 --rc geninfo_all_blocks=1 00:43:35.882 --rc geninfo_unexecuted_blocks=1 00:43:35.882 00:43:35.882 ' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:35.882 10:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:42.475 10:54:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:42.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:42.475 10:54:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:42.475 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:42.475 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:42.476 Found net devices under 0000:af:00.0: cvl_0_0 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:42.476 Found net devices under 0000:af:00.1: cvl_0_1 
00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:42.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:42.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:43:42.476 00:43:42.476 --- 10.0.0.2 ping statistics --- 00:43:42.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.476 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:42.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:42.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:43:42.476 00:43:42.476 --- 10.0.0.1 ping statistics --- 00:43:42.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.476 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2309872 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2309872 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2309872 ']' 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:42.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:42.476 10:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.476 [2024-12-09 10:54:43.419755] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:43:42.476 [2024-12-09 10:54:43.419812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:42.476 [2024-12-09 10:54:43.533915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:42.476 [2024-12-09 10:54:43.590057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:42.476 [2024-12-09 10:54:43.590103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:42.476 [2024-12-09 10:54:43.590119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:42.476 [2024-12-09 10:54:43.590133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:42.476 [2024-12-09 10:54:43.590145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:42.476 [2024-12-09 10:54:43.591965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:42.476 [2024-12-09 10:54:43.592067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:42.476 [2024-12-09 10:54:43.592143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:42.476 [2024-12-09 10:54:43.592148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:43.415 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:43.415 [2024-12-09 10:54:44.589538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:43.675 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:43.675 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:43.675 10:54:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:43.935 10:54:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:43.935 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.195 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:44.195 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.455 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:44.455 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:44.714 10:54:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.974 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:44.974 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:45.233 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:45.233 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:45.492 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:45.492 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:43:45.750 10:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:46.009 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:46.009 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:46.268 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:46.268 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:46.526 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:46.786 [2024-12-09 10:54:47.807345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:46.786 10:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:47.045 10:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:47.304 10:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:48.243 10:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:50.162 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:50.162 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:50.162 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:50.422 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:50.422 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:50.422 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:50.422 10:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:50.422 [global] 00:43:50.422 thread=1 00:43:50.422 invalidate=1 00:43:50.422 rw=write 00:43:50.422 time_based=1 00:43:50.422 runtime=1 00:43:50.422 ioengine=libaio 00:43:50.422 direct=1 00:43:50.422 bs=4096 00:43:50.422 iodepth=1 00:43:50.422 norandommap=0 00:43:50.422 numjobs=1 00:43:50.422 00:43:50.422 
verify_dump=1 00:43:50.422 verify_backlog=512 00:43:50.422 verify_state_save=0 00:43:50.422 do_verify=1 00:43:50.422 verify=crc32c-intel 00:43:50.422 [job0] 00:43:50.422 filename=/dev/nvme0n1 00:43:50.422 [job1] 00:43:50.422 filename=/dev/nvme0n2 00:43:50.422 [job2] 00:43:50.422 filename=/dev/nvme0n3 00:43:50.422 [job3] 00:43:50.422 filename=/dev/nvme0n4 00:43:50.422 Could not set queue depth (nvme0n1) 00:43:50.422 Could not set queue depth (nvme0n2) 00:43:50.422 Could not set queue depth (nvme0n3) 00:43:50.422 Could not set queue depth (nvme0n4) 00:43:50.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:50.682 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:50.682 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:50.682 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:50.682 fio-3.35 00:43:50.683 Starting 4 threads 00:43:52.063 00:43:52.063 job0: (groupid=0, jobs=1): err= 0: pid=2311113: Mon Dec 9 10:54:52 2024 00:43:52.063 read: IOPS=1326, BW=5307KiB/s (5434kB/s)(5312KiB/1001msec) 00:43:52.063 slat (nsec): min=11532, max=28864, avg=12519.86, stdev=1460.74 00:43:52.063 clat (usec): min=259, max=1922, avg=411.26, stdev=89.21 00:43:52.063 lat (usec): min=271, max=1934, avg=423.78, stdev=89.12 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 326], 20.00th=[ 347], 00:43:52.063 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 424], 00:43:52.063 | 70.00th=[ 453], 80.00th=[ 486], 90.00th=[ 523], 95.00th=[ 553], 00:43:52.063 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 938], 99.95th=[ 1926], 00:43:52.063 | 99.99th=[ 1926] 00:43:52.063 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:43:52.063 slat (nsec): min=15392, max=58619, avg=17191.88, 
stdev=3016.54 00:43:52.063 clat (usec): min=173, max=1692, avg=261.11, stdev=102.60 00:43:52.063 lat (usec): min=190, max=1708, avg=278.31, stdev=102.88 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:43:52.063 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 277], 00:43:52.063 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:43:52.063 | 99.00th=[ 388], 99.50th=[ 947], 99.90th=[ 1663], 99.95th=[ 1696], 00:43:52.063 | 99.99th=[ 1696] 00:43:52.063 bw ( KiB/s): min= 8192, max= 8192, per=32.01%, avg=8192.00, stdev= 0.00, samples=1 00:43:52.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:52.063 lat (usec) : 250=28.67%, 500=64.07%, 750=6.88%, 1000=0.10% 00:43:52.063 lat (msec) : 2=0.28% 00:43:52.063 cpu : usr=3.10%, sys=6.70%, ctx=2866, majf=0, minf=1 00:43:52.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.063 issued rwts: total=1328,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.063 job1: (groupid=0, jobs=1): err= 0: pid=2311114: Mon Dec 9 10:54:52 2024 00:43:52.063 read: IOPS=1323, BW=5295KiB/s (5422kB/s)(5300KiB/1001msec) 00:43:52.063 slat (nsec): min=8909, max=31974, avg=11151.31, stdev=1794.75 00:43:52.063 clat (usec): min=247, max=2091, avg=418.65, stdev=112.07 00:43:52.063 lat (usec): min=261, max=2112, avg=429.80, stdev=112.22 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 297], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 347], 00:43:52.063 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 388], 60.00th=[ 416], 00:43:52.063 | 70.00th=[ 445], 80.00th=[ 490], 90.00th=[ 553], 95.00th=[ 594], 00:43:52.063 | 99.00th=[ 635], 99.50th=[ 889], 99.90th=[ 1893], 
99.95th=[ 2089], 00:43:52.063 | 99.99th=[ 2089] 00:43:52.063 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:43:52.063 slat (nsec): min=13092, max=53795, avg=14670.11, stdev=2391.69 00:43:52.063 clat (usec): min=162, max=2063, avg=259.90, stdev=90.28 00:43:52.063 lat (usec): min=179, max=2076, avg=274.57, stdev=90.48 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:43:52.063 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 243], 60.00th=[ 273], 00:43:52.063 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:43:52.063 | 99.00th=[ 363], 99.50th=[ 449], 99.90th=[ 1614], 99.95th=[ 2057], 00:43:52.063 | 99.99th=[ 2057] 00:43:52.063 bw ( KiB/s): min= 8192, max= 8192, per=32.01%, avg=8192.00, stdev= 0.00, samples=1 00:43:52.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:52.063 lat (usec) : 250=29.05%, 500=63.05%, 750=7.41%, 1000=0.21% 00:43:52.063 lat (msec) : 2=0.21%, 4=0.07% 00:43:52.063 cpu : usr=4.40%, sys=4.50%, ctx=2861, majf=0, minf=2 00:43:52.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.063 issued rwts: total=1325,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.063 job2: (groupid=0, jobs=1): err= 0: pid=2311115: Mon Dec 9 10:54:52 2024 00:43:52.063 read: IOPS=903, BW=3612KiB/s (3699kB/s)(3616KiB/1001msec) 00:43:52.063 slat (nsec): min=10833, max=52633, avg=13123.63, stdev=4505.26 00:43:52.063 clat (usec): min=200, max=40995, avg=757.36, stdev=3804.12 00:43:52.063 lat (usec): min=236, max=41012, avg=770.48, stdev=3804.61 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:43:52.063 | 
30.00th=[ 273], 40.00th=[ 318], 50.00th=[ 408], 60.00th=[ 424], 00:43:52.063 | 70.00th=[ 449], 80.00th=[ 529], 90.00th=[ 586], 95.00th=[ 603], 00:43:52.063 | 99.00th=[ 1893], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:52.063 | 99.99th=[41157] 00:43:52.063 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:52.063 slat (usec): min=14, max=766, avg=18.38, stdev=24.05 00:43:52.063 clat (usec): min=162, max=2253, avg=271.33, stdev=114.22 00:43:52.063 lat (usec): min=186, max=2267, avg=289.71, stdev=115.85 00:43:52.063 clat percentiles (usec): 00:43:52.063 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 212], 00:43:52.063 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 277], 60.00th=[ 293], 00:43:52.063 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:43:52.063 | 99.00th=[ 396], 99.50th=[ 922], 99.90th=[ 1614], 99.95th=[ 2245], 00:43:52.063 | 99.99th=[ 2245] 00:43:52.063 bw ( KiB/s): min= 4096, max= 4096, per=16.01%, avg=4096.00, stdev= 0.00, samples=1 00:43:52.063 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:52.063 lat (usec) : 250=28.32%, 500=60.37%, 750=10.27%, 1000=0.10% 00:43:52.063 lat (msec) : 2=0.41%, 4=0.10%, 50=0.41% 00:43:52.063 cpu : usr=1.90%, sys=4.40%, ctx=1930, majf=0, minf=1 00:43:52.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.064 issued rwts: total=904,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.064 job3: (groupid=0, jobs=1): err= 0: pid=2311116: Mon Dec 9 10:54:52 2024 00:43:52.064 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:43:52.064 slat (nsec): min=10740, max=40929, avg=11824.86, stdev=2247.95 00:43:52.064 clat (usec): min=194, max=3591, avg=246.05, 
stdev=109.13 00:43:52.064 lat (usec): min=205, max=3602, avg=257.87, stdev=109.13 00:43:52.064 clat percentiles (usec): 00:43:52.064 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:43:52.064 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:43:52.064 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 262], 00:43:52.064 | 99.00th=[ 273], 99.50th=[ 375], 99.90th=[ 1614], 99.95th=[ 3523], 00:43:52.064 | 99.99th=[ 3589] 00:43:52.064 write: IOPS=2305, BW=9223KiB/s (9444kB/s)(9232KiB/1001msec); 0 zone resets 00:43:52.064 slat (nsec): min=13973, max=73155, avg=17323.25, stdev=5623.52 00:43:52.064 clat (usec): min=121, max=3657, avg=180.69, stdev=95.50 00:43:52.064 lat (usec): min=146, max=3673, avg=198.02, stdev=95.83 00:43:52.064 clat percentiles (usec): 00:43:52.064 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:43:52.064 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:43:52.064 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 210], 95.00th=[ 229], 00:43:52.064 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 1696], 99.95th=[ 1942], 00:43:52.064 | 99.99th=[ 3654] 00:43:52.064 bw ( KiB/s): min= 8920, max= 8920, per=34.86%, avg=8920.00, stdev= 0.00, samples=1 00:43:52.064 iops : min= 2230, max= 2230, avg=2230.00, stdev= 0.00, samples=1 00:43:52.064 lat (usec) : 250=88.59%, 500=11.18%, 750=0.07% 00:43:52.064 lat (msec) : 2=0.09%, 4=0.07% 00:43:52.064 cpu : usr=5.80%, sys=8.20%, ctx=4357, majf=0, minf=1 00:43:52.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.064 issued rwts: total=2048,2308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.064 00:43:52.064 Run status group 0 (all jobs): 00:43:52.064 READ: bw=21.9MiB/s 
(22.9MB/s), 3612KiB/s-8184KiB/s (3699kB/s-8380kB/s), io=21.9MiB (23.0MB), run=1001-1001msec 00:43:52.064 WRITE: bw=25.0MiB/s (26.2MB/s), 4092KiB/s-9223KiB/s (4190kB/s-9444kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:43:52.064 00:43:52.064 Disk stats (read/write): 00:43:52.064 nvme0n1: ios=1073/1509, merge=0/0, ticks=582/367, in_queue=949, util=84.27% 00:43:52.064 nvme0n2: ios=1074/1503, merge=0/0, ticks=460/368, in_queue=828, util=89.25% 00:43:52.064 nvme0n3: ios=651/1024, merge=0/0, ticks=880/257, in_queue=1137, util=92.28% 00:43:52.064 nvme0n4: ios=1655/2048, merge=0/0, ticks=562/354, in_queue=916, util=94.34% 00:43:52.064 10:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:52.064 [global] 00:43:52.064 thread=1 00:43:52.064 invalidate=1 00:43:52.064 rw=randwrite 00:43:52.064 time_based=1 00:43:52.064 runtime=1 00:43:52.064 ioengine=libaio 00:43:52.064 direct=1 00:43:52.064 bs=4096 00:43:52.064 iodepth=1 00:43:52.064 norandommap=0 00:43:52.064 numjobs=1 00:43:52.064 00:43:52.064 verify_dump=1 00:43:52.064 verify_backlog=512 00:43:52.064 verify_state_save=0 00:43:52.064 do_verify=1 00:43:52.064 verify=crc32c-intel 00:43:52.064 [job0] 00:43:52.064 filename=/dev/nvme0n1 00:43:52.064 [job1] 00:43:52.064 filename=/dev/nvme0n2 00:43:52.064 [job2] 00:43:52.064 filename=/dev/nvme0n3 00:43:52.064 [job3] 00:43:52.064 filename=/dev/nvme0n4 00:43:52.064 Could not set queue depth (nvme0n1) 00:43:52.064 Could not set queue depth (nvme0n2) 00:43:52.064 Could not set queue depth (nvme0n3) 00:43:52.064 Could not set queue depth (nvme0n4) 00:43:52.323 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:52.323 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:52.323 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:52.323 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:52.323 fio-3.35 00:43:52.323 Starting 4 threads 00:43:53.705 00:43:53.705 job0: (groupid=0, jobs=1): err= 0: pid=2311410: Mon Dec 9 10:54:54 2024 00:43:53.705 read: IOPS=33, BW=133KiB/s (136kB/s)(136KiB/1024msec) 00:43:53.705 slat (nsec): min=14511, max=26938, avg=21582.68, stdev=5184.38 00:43:53.705 clat (usec): min=271, max=41028, avg=26497.56, stdev=19643.26 00:43:53.705 lat (usec): min=296, max=41053, avg=26519.14, stdev=19640.26 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:43:53.705 | 30.00th=[ 314], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:43:53.705 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:53.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:53.705 | 99.99th=[41157] 00:43:53.705 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:43:53.705 slat (nsec): min=15295, max=61151, avg=17113.55, stdev=3380.32 00:43:53.705 clat (usec): min=168, max=286, avg=215.31, stdev=17.33 00:43:53.705 lat (usec): min=183, max=347, avg=232.42, stdev=18.26 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:43:53.705 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:43:53.705 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 237], 95.00th=[ 239], 00:43:53.705 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 285], 00:43:53.705 | 99.99th=[ 285] 00:43:53.705 bw ( KiB/s): min= 4096, max= 4096, per=51.20%, avg=4096.00, stdev= 0.00, samples=1 00:43:53.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:53.705 lat (usec) : 250=92.49%, 500=3.48% 00:43:53.705 lat (msec) : 50=4.03% 00:43:53.705 cpu : usr=0.78%, sys=1.08%, 
ctx=548, majf=0, minf=1 00:43:53.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.705 job1: (groupid=0, jobs=1): err= 0: pid=2311411: Mon Dec 9 10:54:54 2024 00:43:53.705 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:43:53.705 slat (nsec): min=13437, max=35930, avg=25606.50, stdev=3718.32 00:43:53.705 clat (usec): min=40860, max=41972, avg=41073.15, stdev=294.04 00:43:53.705 lat (usec): min=40887, max=42007, avg=41098.75, stdev=294.67 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:53.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:53.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:53.705 | 99.99th=[42206] 00:43:53.705 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:43:53.705 slat (nsec): min=13392, max=48169, avg=15140.27, stdev=2685.08 00:43:53.705 clat (usec): min=144, max=374, avg=188.35, stdev=19.88 00:43:53.705 lat (usec): min=160, max=387, avg=203.49, stdev=20.21 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 176], 00:43:53.705 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:43:53.705 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:43:53.705 | 99.00th=[ 233], 99.50th=[ 297], 99.90th=[ 375], 99.95th=[ 375], 00:43:53.705 | 99.99th=[ 375] 00:43:53.705 bw ( KiB/s): min= 4096, max= 4096, per=51.20%, avg=4096.00, stdev= 0.00, samples=1 00:43:53.705 iops 
: min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:53.705 lat (usec) : 250=95.32%, 500=0.56% 00:43:53.705 lat (msec) : 50=4.12% 00:43:53.705 cpu : usr=0.89%, sys=0.89%, ctx=535, majf=0, minf=1 00:43:53.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.705 job2: (groupid=0, jobs=1): err= 0: pid=2311414: Mon Dec 9 10:54:54 2024 00:43:53.705 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:43:53.705 slat (nsec): min=13262, max=27500, avg=26391.55, stdev=2943.68 00:43:53.705 clat (usec): min=40836, max=41964, avg=41057.25, stdev=298.28 00:43:53.705 lat (usec): min=40863, max=41991, avg=41083.64, stdev=298.12 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:53.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:43:53.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:53.705 | 99.99th=[42206] 00:43:53.705 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:43:53.705 slat (nsec): min=13376, max=88479, avg=15292.91, stdev=4117.42 00:43:53.705 clat (usec): min=148, max=323, avg=188.18, stdev=17.20 00:43:53.705 lat (usec): min=163, max=412, avg=203.47, stdev=19.09 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:43:53.705 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:43:53.705 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 217], 00:43:53.705 | 99.00th=[ 233], 
99.50th=[ 251], 99.90th=[ 326], 99.95th=[ 326], 00:43:53.705 | 99.99th=[ 326] 00:43:53.705 bw ( KiB/s): min= 4096, max= 4096, per=51.20%, avg=4096.00, stdev= 0.00, samples=1 00:43:53.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:53.705 lat (usec) : 250=95.32%, 500=0.56% 00:43:53.705 lat (msec) : 50=4.12% 00:43:53.705 cpu : usr=0.40%, sys=1.29%, ctx=535, majf=0, minf=1 00:43:53.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.705 job3: (groupid=0, jobs=1): err= 0: pid=2311415: Mon Dec 9 10:54:54 2024 00:43:53.705 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:43:53.705 slat (nsec): min=15129, max=27796, avg=26709.05, stdev=2593.09 00:43:53.705 clat (usec): min=40834, max=41052, avg=40959.13, stdev=48.60 00:43:53.705 lat (usec): min=40861, max=41080, avg=40985.84, stdev=49.00 00:43:53.706 clat percentiles (usec): 00:43:53.706 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:53.706 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.706 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:53.706 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:53.706 | 99.99th=[41157] 00:43:53.706 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:43:53.706 slat (nsec): min=13314, max=48281, avg=14904.09, stdev=2280.56 00:43:53.706 clat (usec): min=155, max=430, avg=206.47, stdev=31.84 00:43:53.706 lat (usec): min=169, max=447, avg=221.37, stdev=32.28 00:43:53.706 clat percentiles (usec): 00:43:53.706 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 
00:43:53.706 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 217], 00:43:53.706 | 70.00th=[ 235], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 245], 00:43:53.706 | 99.00th=[ 293], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 433], 00:43:53.706 | 99.99th=[ 433] 00:43:53.706 bw ( KiB/s): min= 4096, max= 4096, per=51.20%, avg=4096.00, stdev= 0.00, samples=1 00:43:53.706 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:53.706 lat (usec) : 250=93.07%, 500=2.81% 00:43:53.706 lat (msec) : 50=4.12% 00:43:53.706 cpu : usr=0.98%, sys=0.69%, ctx=535, majf=0, minf=1 00:43:53.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.706 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.706 00:43:53.706 Run status group 0 (all jobs): 00:43:53.706 READ: bw=391KiB/s (400kB/s), 86.5KiB/s-133KiB/s (88.6kB/s-136kB/s), io=400KiB (410kB), run=1010-1024msec 00:43:53.706 WRITE: bw=8000KiB/s (8192kB/s), 2000KiB/s-2028KiB/s (2048kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1024msec 00:43:53.706 00:43:53.706 Disk stats (read/write): 00:43:53.706 nvme0n1: ios=76/512, merge=0/0, ticks=767/104, in_queue=871, util=85.96% 00:43:53.706 nvme0n2: ios=67/512, merge=0/0, ticks=806/94, in_queue=900, util=91.16% 00:43:53.706 nvme0n3: ios=74/512, merge=0/0, ticks=759/87, in_queue=846, util=90.72% 00:43:53.706 nvme0n4: ios=74/512, merge=0/0, ticks=777/107, in_queue=884, util=95.83% 00:43:53.706 10:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:53.706 [global] 00:43:53.706 thread=1 00:43:53.706 invalidate=1 00:43:53.706 rw=write 00:43:53.706 
time_based=1 00:43:53.706 runtime=1 00:43:53.706 ioengine=libaio 00:43:53.706 direct=1 00:43:53.706 bs=4096 00:43:53.706 iodepth=128 00:43:53.706 norandommap=0 00:43:53.706 numjobs=1 00:43:53.706 00:43:53.706 verify_dump=1 00:43:53.706 verify_backlog=512 00:43:53.706 verify_state_save=0 00:43:53.706 do_verify=1 00:43:53.706 verify=crc32c-intel 00:43:53.706 [job0] 00:43:53.706 filename=/dev/nvme0n1 00:43:53.706 [job1] 00:43:53.706 filename=/dev/nvme0n2 00:43:53.706 [job2] 00:43:53.706 filename=/dev/nvme0n3 00:43:53.706 [job3] 00:43:53.706 filename=/dev/nvme0n4 00:43:53.706 Could not set queue depth (nvme0n1) 00:43:53.706 Could not set queue depth (nvme0n2) 00:43:53.706 Could not set queue depth (nvme0n3) 00:43:53.706 Could not set queue depth (nvme0n4) 00:43:53.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:53.972 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:53.972 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:53.972 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:53.972 fio-3.35 00:43:53.972 Starting 4 threads 00:43:55.352 00:43:55.352 job0: (groupid=0, jobs=1): err= 0: pid=2311712: Mon Dec 9 10:54:56 2024 00:43:55.352 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:43:55.352 slat (usec): min=2, max=10205, avg=117.56, stdev=690.47 00:43:55.352 clat (usec): min=3982, max=69151, avg=14846.77, stdev=7144.48 00:43:55.352 lat (usec): min=3986, max=69161, avg=14964.33, stdev=7199.52 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 6718], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11731], 00:43:55.352 | 30.00th=[12780], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:43:55.352 | 70.00th=[14615], 80.00th=[15533], 90.00th=[17171], 95.00th=[21103], 00:43:55.352 | 99.00th=[58459], 
99.50th=[63701], 99.90th=[68682], 99.95th=[68682], 00:43:55.352 | 99.99th=[68682] 00:43:55.352 write: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1003msec); 0 zone resets 00:43:55.352 slat (usec): min=3, max=12227, avg=110.45, stdev=630.75 00:43:55.352 clat (usec): min=269, max=84138, avg=15622.15, stdev=9397.29 00:43:55.352 lat (usec): min=981, max=84145, avg=15732.60, stdev=9429.67 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 3752], 5.00th=[ 6783], 10.00th=[10290], 20.00th=[12518], 00:43:55.352 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:43:55.352 | 70.00th=[13698], 80.00th=[15664], 90.00th=[22676], 95.00th=[34341], 00:43:55.352 | 99.00th=[61080], 99.50th=[67634], 99.90th=[84411], 99.95th=[84411], 00:43:55.352 | 99.99th=[84411] 00:43:55.352 bw ( KiB/s): min=15168, max=17600, per=24.64%, avg=16384.00, stdev=1719.68, samples=2 00:43:55.352 iops : min= 3792, max= 4400, avg=4096.00, stdev=429.92, samples=2 00:43:55.352 lat (usec) : 500=0.01%, 1000=0.06% 00:43:55.352 lat (msec) : 2=0.06%, 4=1.02%, 10=7.04%, 20=80.86%, 50=9.23% 00:43:55.352 lat (msec) : 100=1.70% 00:43:55.352 cpu : usr=4.99%, sys=6.79%, ctx=388, majf=0, minf=1 00:43:55.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.352 issued rwts: total=4096,4124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.352 job1: (groupid=0, jobs=1): err= 0: pid=2311713: Mon Dec 9 10:54:56 2024 00:43:55.352 read: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec) 00:43:55.352 slat (nsec): min=1954, max=17618k, avg=91974.59, stdev=546400.34 00:43:55.352 clat (usec): min=491, max=49325, avg=11010.31, stdev=3286.97 00:43:55.352 lat (usec): min=3315, max=49330, avg=11102.28, stdev=3324.07 00:43:55.352 clat 
percentiles (usec): 00:43:55.352 | 1.00th=[ 5538], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9634], 00:43:55.352 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:43:55.352 | 70.00th=[11076], 80.00th=[11469], 90.00th=[13829], 95.00th=[16057], 00:43:55.352 | 99.00th=[23200], 99.50th=[36963], 99.90th=[49546], 99.95th=[49546], 00:43:55.352 | 99.99th=[49546] 00:43:55.352 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:55.352 slat (usec): min=2, max=11987, avg=99.61, stdev=635.46 00:43:55.352 clat (usec): min=4604, max=53088, avg=13681.51, stdev=7967.72 00:43:55.352 lat (usec): min=4722, max=53103, avg=13781.12, stdev=8003.86 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 6456], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[ 9765], 00:43:55.352 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10945], 00:43:55.352 | 70.00th=[12649], 80.00th=[16188], 90.00th=[20841], 95.00th=[31589], 00:43:55.352 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:43:55.352 | 99.99th=[53216] 00:43:55.352 bw ( KiB/s): min=16384, max=24576, per=30.80%, avg=20480.00, stdev=5792.62, samples=2 00:43:55.352 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:43:55.352 lat (usec) : 500=0.01% 00:43:55.352 lat (msec) : 4=0.31%, 10=28.06%, 20=64.74%, 50=6.26%, 100=0.62% 00:43:55.352 cpu : usr=4.59%, sys=4.29%, ctx=526, majf=0, minf=1 00:43:55.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.352 issued rwts: total=5044,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.352 job2: (groupid=0, jobs=1): err= 0: pid=2311714: Mon Dec 9 10:54:56 2024 00:43:55.352 read: IOPS=3491, BW=13.6MiB/s 
(14.3MB/s)(13.7MiB/1004msec) 00:43:55.352 slat (usec): min=2, max=10039, avg=113.32, stdev=733.00 00:43:55.352 clat (usec): min=1845, max=28275, avg=15480.66, stdev=4238.57 00:43:55.352 lat (usec): min=2224, max=28304, avg=15593.97, stdev=4301.92 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 5276], 5.00th=[ 6849], 10.00th=[ 9372], 20.00th=[12518], 00:43:55.352 | 30.00th=[13042], 40.00th=[15139], 50.00th=[16450], 60.00th=[17957], 00:43:55.352 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19792], 95.00th=[21103], 00:43:55.352 | 99.00th=[22414], 99.50th=[23200], 99.90th=[26608], 99.95th=[28181], 00:43:55.352 | 99.99th=[28181] 00:43:55.352 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:43:55.352 slat (usec): min=4, max=15710, avg=147.99, stdev=772.36 00:43:55.352 clat (usec): min=2039, max=54947, avg=20274.74, stdev=8535.35 00:43:55.352 lat (usec): min=2045, max=54953, avg=20422.73, stdev=8588.62 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 4490], 5.00th=[ 8094], 10.00th=[10814], 20.00th=[14222], 00:43:55.352 | 30.00th=[15795], 40.00th=[17695], 50.00th=[20055], 60.00th=[21890], 00:43:55.352 | 70.00th=[22676], 80.00th=[24773], 90.00th=[26608], 95.00th=[34341], 00:43:55.352 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:43:55.352 | 99.99th=[54789] 00:43:55.352 bw ( KiB/s): min=12296, max=16376, per=21.56%, avg=14336.00, stdev=2885.00, samples=2 00:43:55.352 iops : min= 3074, max= 4094, avg=3584.00, stdev=721.25, samples=2 00:43:55.352 lat (msec) : 2=0.01%, 4=0.38%, 10=9.10%, 20=59.90%, 50=29.40% 00:43:55.352 lat (msec) : 100=1.21% 00:43:55.352 cpu : usr=4.59%, sys=5.78%, ctx=376, majf=0, minf=1 00:43:55.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.352 issued rwts: 
total=3505,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.352 job3: (groupid=0, jobs=1): err= 0: pid=2311715: Mon Dec 9 10:54:56 2024 00:43:55.352 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:43:55.352 slat (usec): min=3, max=13599, avg=145.58, stdev=938.67 00:43:55.352 clat (usec): min=4851, max=49410, avg=19239.37, stdev=9063.22 00:43:55.352 lat (usec): min=4867, max=49439, avg=19384.95, stdev=9155.64 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10945], 00:43:55.352 | 30.00th=[11731], 40.00th=[13829], 50.00th=[17171], 60.00th=[20317], 00:43:55.352 | 70.00th=[24249], 80.00th=[27657], 90.00th=[33424], 95.00th=[36439], 00:43:55.352 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[47973], 00:43:55.352 | 99.99th=[49546] 00:43:55.352 write: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1005msec); 0 zone resets 00:43:55.352 slat (usec): min=4, max=11116, avg=105.67, stdev=507.71 00:43:55.352 clat (usec): min=1019, max=43319, avg=15082.71, stdev=7939.47 00:43:55.352 lat (usec): min=1955, max=43332, avg=15188.37, stdev=7977.54 00:43:55.352 clat percentiles (usec): 00:43:55.352 | 1.00th=[ 4752], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 8979], 00:43:55.352 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[12780], 00:43:55.352 | 70.00th=[18220], 80.00th=[22152], 90.00th=[25297], 95.00th=[33424], 00:43:55.352 | 99.00th=[39060], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:43:55.352 | 99.99th=[43254] 00:43:55.352 bw ( KiB/s): min= 9544, max=20480, per=22.57%, avg=15012.00, stdev=7732.92, samples=2 00:43:55.352 iops : min= 2386, max= 5120, avg=3753.00, stdev=1933.23, samples=2 00:43:55.352 lat (msec) : 2=0.09%, 10=18.97%, 20=45.89%, 50=35.04% 00:43:55.352 cpu : usr=5.88%, sys=6.18%, ctx=458, majf=0, minf=1 00:43:55.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:43:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.352 issued rwts: total=3584,3881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.352 00:43:55.352 Run status group 0 (all jobs): 00:43:55.352 READ: bw=63.1MiB/s (66.1MB/s), 13.6MiB/s-19.6MiB/s (14.3MB/s-20.6MB/s), io=63.4MiB (66.5MB), run=1003-1005msec 00:43:55.352 WRITE: bw=64.9MiB/s (68.1MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=65.3MiB (68.4MB), run=1003-1005msec 00:43:55.352 00:43:55.352 Disk stats (read/write): 00:43:55.352 nvme0n1: ios=3125/3584, merge=0/0, ticks=20231/23411, in_queue=43642, util=85.07% 00:43:55.352 nvme0n2: ios=4136/4183, merge=0/0, ticks=13486/15010, in_queue=28496, util=99.90% 00:43:55.352 nvme0n3: ios=2677/3072, merge=0/0, ticks=24733/36596, in_queue=61329, util=88.17% 00:43:55.352 nvme0n4: ios=3072/3431, merge=0/0, ticks=33085/32495, in_queue=65580, util=88.89% 00:43:55.352 10:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:55.352 [global] 00:43:55.353 thread=1 00:43:55.353 invalidate=1 00:43:55.353 rw=randwrite 00:43:55.353 time_based=1 00:43:55.353 runtime=1 00:43:55.353 ioengine=libaio 00:43:55.353 direct=1 00:43:55.353 bs=4096 00:43:55.353 iodepth=128 00:43:55.353 norandommap=0 00:43:55.353 numjobs=1 00:43:55.353 00:43:55.353 verify_dump=1 00:43:55.353 verify_backlog=512 00:43:55.353 verify_state_save=0 00:43:55.353 do_verify=1 00:43:55.353 verify=crc32c-intel 00:43:55.353 [job0] 00:43:55.353 filename=/dev/nvme0n1 00:43:55.353 [job1] 00:43:55.353 filename=/dev/nvme0n2 00:43:55.353 [job2] 00:43:55.353 filename=/dev/nvme0n3 00:43:55.353 [job3] 00:43:55.353 filename=/dev/nvme0n4 00:43:55.353 Could not set queue depth (nvme0n1) 00:43:55.353 
Could not set queue depth (nvme0n2) 00:43:55.353 Could not set queue depth (nvme0n3) 00:43:55.353 Could not set queue depth (nvme0n4) 00:43:55.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:55.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:55.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:55.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:55.612 fio-3.35 00:43:55.612 Starting 4 threads 00:43:56.994 00:43:56.994 job0: (groupid=0, jobs=1): err= 0: pid=2312010: Mon Dec 9 10:54:57 2024 00:43:56.994 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:43:56.994 slat (usec): min=2, max=23373, avg=90.22, stdev=753.97 00:43:56.994 clat (usec): min=2147, max=87043, avg=13467.64, stdev=10407.18 00:43:56.994 lat (usec): min=2151, max=87054, avg=13557.86, stdev=10452.73 00:43:56.994 clat percentiles (usec): 00:43:56.994 | 1.00th=[ 3851], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8848], 00:43:56.994 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:43:56.994 | 70.00th=[12911], 80.00th=[15270], 90.00th=[17433], 95.00th=[23987], 00:43:56.994 | 99.00th=[73925], 99.50th=[79168], 99.90th=[86508], 99.95th=[86508], 00:43:56.994 | 99.99th=[87557] 00:43:56.994 write: IOPS=4919, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1005msec); 0 zone resets 00:43:56.994 slat (usec): min=3, max=28030, avg=99.30, stdev=694.59 00:43:56.994 clat (usec): min=754, max=50542, avg=13204.49, stdev=9102.80 00:43:56.994 lat (usec): min=787, max=50552, avg=13303.79, stdev=9153.38 00:43:56.994 clat percentiles (usec): 00:43:56.994 | 1.00th=[ 2057], 5.00th=[ 4686], 10.00th=[ 7111], 20.00th=[ 8356], 00:43:56.994 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[10814], 60.00th=[11076], 00:43:56.994 | 
70.00th=[11600], 80.00th=[13829], 90.00th=[28967], 95.00th=[37487], 00:43:56.994 | 99.00th=[43779], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:43:56.994 | 99.99th=[50594] 00:43:56.994 bw ( KiB/s): min=18000, max=20528, per=27.62%, avg=19264.00, stdev=1787.57, samples=2 00:43:56.994 iops : min= 4500, max= 5132, avg=4816.00, stdev=446.89, samples=2 00:43:56.994 lat (usec) : 1000=0.02% 00:43:56.994 lat (msec) : 2=0.17%, 4=2.13%, 10=35.61%, 20=51.94%, 50=8.75% 00:43:56.994 lat (msec) : 100=1.39% 00:43:56.994 cpu : usr=4.78%, sys=6.57%, ctx=384, majf=0, minf=1 00:43:56.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:56.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:56.994 issued rwts: total=4608,4944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:56.994 job1: (groupid=0, jobs=1): err= 0: pid=2312017: Mon Dec 9 10:54:57 2024 00:43:56.994 read: IOPS=4105, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1009msec) 00:43:56.994 slat (usec): min=3, max=14665, avg=97.72, stdev=661.56 00:43:56.994 clat (usec): min=4242, max=32167, avg=12729.03, stdev=4389.50 00:43:56.994 lat (usec): min=4252, max=32210, avg=12826.75, stdev=4430.80 00:43:56.994 clat percentiles (usec): 00:43:56.994 | 1.00th=[ 7439], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9765], 00:43:56.994 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:43:56.994 | 70.00th=[14353], 80.00th=[15401], 90.00th=[19006], 95.00th=[21627], 00:43:56.994 | 99.00th=[28967], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:43:56.994 | 99.99th=[32113] 00:43:56.994 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:43:56.994 slat (usec): min=4, max=10774, avg=117.16, stdev=637.42 00:43:56.994 clat (usec): min=3912, max=61794, avg=16188.51, stdev=11513.17 00:43:56.994 
lat (usec): min=3926, max=61801, avg=16305.67, stdev=11584.40 00:43:56.994 clat percentiles (usec): 00:43:56.994 | 1.00th=[ 4621], 5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[ 9634], 00:43:56.994 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11207], 60.00th=[13304], 00:43:56.994 | 70.00th=[15139], 80.00th=[19792], 90.00th=[37487], 95.00th=[44303], 00:43:56.994 | 99.00th=[55837], 99.50th=[60556], 99.90th=[61604], 99.95th=[61604], 00:43:56.994 | 99.99th=[61604] 00:43:56.994 bw ( KiB/s): min=16824, max=19384, per=25.96%, avg=18104.00, stdev=1810.19, samples=2 00:43:56.994 iops : min= 4206, max= 4846, avg=4526.00, stdev=452.55, samples=2 00:43:56.994 lat (msec) : 4=0.17%, 10=25.17%, 20=61.06%, 50=12.63%, 100=0.97% 00:43:56.994 cpu : usr=5.36%, sys=9.42%, ctx=383, majf=0, minf=1 00:43:56.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:56.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:56.994 issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:56.995 job2: (groupid=0, jobs=1): err= 0: pid=2312027: Mon Dec 9 10:54:57 2024 00:43:56.995 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:43:56.995 slat (usec): min=2, max=9248, avg=115.69, stdev=709.45 00:43:56.995 clat (usec): min=5455, max=68230, avg=14847.73, stdev=5313.11 00:43:56.995 lat (usec): min=5466, max=68238, avg=14963.42, stdev=5346.05 00:43:56.995 clat percentiles (usec): 00:43:56.995 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11731], 00:43:56.995 | 30.00th=[12256], 40.00th=[13435], 50.00th=[14222], 60.00th=[15008], 00:43:56.995 | 70.00th=[15795], 80.00th=[17433], 90.00th=[19530], 95.00th=[22414], 00:43:56.995 | 99.00th=[34341], 99.50th=[35390], 99.90th=[68682], 99.95th=[68682], 00:43:56.995 | 99.99th=[68682] 00:43:56.995 write: IOPS=3968, 
BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec); 0 zone resets 00:43:56.995 slat (usec): min=3, max=25365, avg=138.38, stdev=724.50 00:43:56.995 clat (usec): min=1692, max=41572, avg=18555.08, stdev=9354.35 00:43:56.995 lat (usec): min=4123, max=41611, avg=18693.45, stdev=9407.31 00:43:56.995 clat percentiles (usec): 00:43:56.995 | 1.00th=[ 6063], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[10945], 00:43:56.995 | 30.00th=[11731], 40.00th=[12780], 50.00th=[15008], 60.00th=[18482], 00:43:56.995 | 70.00th=[24249], 80.00th=[28181], 90.00th=[33162], 95.00th=[36439], 00:43:56.995 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:43:56.995 | 99.99th=[41681] 00:43:56.995 bw ( KiB/s): min=14840, max=16008, per=22.12%, avg=15424.00, stdev=825.90, samples=2 00:43:56.995 iops : min= 3710, max= 4002, avg=3856.00, stdev=206.48, samples=2 00:43:56.995 lat (msec) : 2=0.01%, 4=0.01%, 10=11.95%, 20=64.11%, 50=23.73% 00:43:56.995 lat (msec) : 100=0.18% 00:43:56.995 cpu : usr=3.69%, sys=6.08%, ctx=411, majf=0, minf=1 00:43:56.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:56.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:56.995 issued rwts: total=3584,3984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:56.995 job3: (groupid=0, jobs=1): err= 0: pid=2312033: Mon Dec 9 10:54:57 2024 00:43:56.995 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:43:56.995 slat (usec): min=3, max=17904, avg=136.57, stdev=785.79 00:43:56.995 clat (usec): min=5265, max=67771, avg=17997.20, stdev=9798.73 00:43:56.995 lat (usec): min=5275, max=67806, avg=18133.77, stdev=9870.31 00:43:56.995 clat percentiles (usec): 00:43:56.995 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11600], 00:43:56.995 | 30.00th=[12256], 40.00th=[12911], 50.00th=[14484], 
60.00th=[16909], 00:43:56.995 | 70.00th=[18744], 80.00th=[22414], 90.00th=[33162], 95.00th=[38536], 00:43:56.995 | 99.00th=[63701], 99.50th=[64750], 99.90th=[64750], 99.95th=[67634], 00:43:56.995 | 99.99th=[67634] 00:43:56.995 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1003msec); 0 zone resets 00:43:56.995 slat (usec): min=4, max=25279, avg=113.63, stdev=712.57 00:43:56.995 clat (usec): min=765, max=63840, avg=15392.16, stdev=7827.56 00:43:56.995 lat (usec): min=1659, max=63847, avg=15505.79, stdev=7879.61 00:43:56.995 clat percentiles (usec): 00:43:56.995 | 1.00th=[ 3621], 5.00th=[ 7898], 10.00th=[ 9503], 20.00th=[10290], 00:43:56.995 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13042], 60.00th=[14615], 00:43:56.995 | 70.00th=[16057], 80.00th=[17695], 90.00th=[24511], 95.00th=[34866], 00:43:56.995 | 99.00th=[50070], 99.50th=[53740], 99.90th=[53740], 99.95th=[54264], 00:43:56.995 | 99.99th=[63701] 00:43:56.995 bw ( KiB/s): min=15656, max=15768, per=22.53%, avg=15712.00, stdev=79.20, samples=2 00:43:56.995 iops : min= 3914, max= 3942, avg=3928.00, stdev=19.80, samples=2 00:43:56.995 lat (usec) : 1000=0.01% 00:43:56.995 lat (msec) : 2=0.16%, 4=0.59%, 10=11.74%, 20=68.48%, 50=18.04% 00:43:56.995 lat (msec) : 100=0.98% 00:43:56.995 cpu : usr=5.69%, sys=7.39%, ctx=393, majf=0, minf=2 00:43:56.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:56.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:56.995 issued rwts: total=3584,4056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:56.995 00:43:56.995 Run status group 0 (all jobs): 00:43:56.995 READ: bw=61.6MiB/s (64.6MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=62.2MiB (65.2MB), run=1003-1009msec 00:43:56.995 WRITE: bw=68.1MiB/s (71.4MB/s), 15.5MiB/s-19.2MiB/s (16.3MB/s-20.1MB/s), io=68.7MiB 
(72.1MB), run=1003-1009msec 00:43:56.995 00:43:56.995 Disk stats (read/write): 00:43:56.995 nvme0n1: ios=4461/4608, merge=0/0, ticks=33313/23591, in_queue=56904, util=96.19% 00:43:56.995 nvme0n2: ios=3614/4052, merge=0/0, ticks=25391/37845, in_queue=63236, util=96.63% 00:43:56.995 nvme0n3: ios=2792/3072, merge=0/0, ticks=23381/33238, in_queue=56619, util=88.49% 00:43:56.995 nvme0n4: ios=2863/3072, merge=0/0, ticks=22275/19171, in_queue=41446, util=96.15% 00:43:56.995 10:54:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:56.995 10:54:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2312194 00:43:56.995 10:54:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:56.995 10:54:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:56.995 [global] 00:43:56.995 thread=1 00:43:56.995 invalidate=1 00:43:56.995 rw=read 00:43:56.995 time_based=1 00:43:56.995 runtime=10 00:43:56.995 ioengine=libaio 00:43:56.995 direct=1 00:43:56.995 bs=4096 00:43:56.995 iodepth=1 00:43:56.995 norandommap=1 00:43:56.995 numjobs=1 00:43:56.995 00:43:56.995 [job0] 00:43:56.995 filename=/dev/nvme0n1 00:43:56.995 [job1] 00:43:56.995 filename=/dev/nvme0n2 00:43:56.995 [job2] 00:43:56.995 filename=/dev/nvme0n3 00:43:56.995 [job3] 00:43:56.995 filename=/dev/nvme0n4 00:43:56.995 Could not set queue depth (nvme0n1) 00:43:56.995 Could not set queue depth (nvme0n2) 00:43:56.995 Could not set queue depth (nvme0n3) 00:43:56.995 Could not set queue depth (nvme0n4) 00:43:56.995 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.995 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.995 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:43:56.995 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.995 fio-3.35 00:43:56.995 Starting 4 threads 00:44:00.291 10:55:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:00.291 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:00.291 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:44:00.291 fio: pid=2312439, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:00.291 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:00.291 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:00.291 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=21692416, buflen=4096 00:44:00.291 fio: pid=2312431, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:00.551 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:00.551 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:00.551 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=25825280, buflen=4096 00:44:00.551 fio: pid=2312393, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:00.811 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:44:00.811 10:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:44:00.811 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31305728, buflen=4096 00:44:00.811 fio: pid=2312405, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:00.811 00:44:00.811 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2312393: Mon Dec 9 10:55:01 2024 00:44:00.811 read: IOPS=1920, BW=7680KiB/s (7864kB/s)(24.6MiB/3284msec) 00:44:00.811 slat (usec): min=2, max=11774, avg=13.23, stdev=184.67 00:44:00.811 clat (usec): min=186, max=42049, avg=502.78, stdev=3119.60 00:44:00.811 lat (usec): min=196, max=42076, avg=516.02, stdev=3125.95 00:44:00.811 clat percentiles (usec): 00:44:00.812 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 237], 00:44:00.812 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:44:00.812 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 469], 00:44:00.812 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:44:00.812 | 99.99th=[42206] 00:44:00.812 bw ( KiB/s): min= 96, max=14216, per=37.43%, avg=8125.83, stdev=6464.93, samples=6 00:44:00.812 iops : min= 24, max= 3554, avg=2031.33, stdev=1616.10, samples=6 00:44:00.812 lat (usec) : 250=52.00%, 500=46.57%, 750=0.81% 00:44:00.812 lat (msec) : 2=0.02%, 50=0.59% 00:44:00.812 cpu : usr=0.97%, sys=2.19%, ctx=6308, majf=0, minf=1 00:44:00.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:00.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 issued rwts: total=6306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:00.812 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:44:00.812 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2312405: Mon Dec 9 10:55:01 2024 00:44:00.812 read: IOPS=2147, BW=8590KiB/s (8796kB/s)(29.9MiB/3559msec) 00:44:00.812 slat (usec): min=8, max=19870, avg=17.69, stdev=309.50 00:44:00.812 clat (usec): min=152, max=41974, avg=441.98, stdev=2676.16 00:44:00.812 lat (usec): min=162, max=42000, avg=459.67, stdev=2694.56 00:44:00.812 clat percentiles (usec): 00:44:00.812 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:44:00.812 | 30.00th=[ 239], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:44:00.812 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 400], 95.00th=[ 412], 00:44:00.812 | 99.00th=[ 453], 99.50th=[ 1762], 99.90th=[41157], 99.95th=[41681], 00:44:00.812 | 99.99th=[42206] 00:44:00.812 bw ( KiB/s): min= 96, max=13544, per=33.92%, avg=7362.33, stdev=6328.23, samples=6 00:44:00.812 iops : min= 24, max= 3386, avg=1840.50, stdev=1582.15, samples=6 00:44:00.812 lat (usec) : 250=31.80%, 500=67.60%, 750=0.05% 00:44:00.812 lat (msec) : 2=0.05%, 4=0.05%, 50=0.43% 00:44:00.812 cpu : usr=1.60%, sys=3.88%, ctx=7648, majf=0, minf=2 00:44:00.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:00.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 issued rwts: total=7644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:00.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:00.812 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2312431: Mon Dec 9 10:55:01 2024 00:44:00.812 read: IOPS=1775, BW=7102KiB/s (7272kB/s)(20.7MiB/2983msec) 00:44:00.812 slat (nsec): min=10255, max=41141, avg=11477.21, stdev=2115.07 00:44:00.812 clat (usec): min=204, max=41620, avg=545.63, stdev=3097.67 00:44:00.812 lat 
(usec): min=214, max=41633, avg=557.10, stdev=3098.47 00:44:00.812 clat percentiles (usec): 00:44:00.812 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:44:00.812 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:44:00.812 | 70.00th=[ 285], 80.00th=[ 445], 90.00th=[ 494], 95.00th=[ 506], 00:44:00.812 | 99.00th=[ 519], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:44:00.812 | 99.99th=[41681] 00:44:00.812 bw ( KiB/s): min= 112, max=15368, per=38.93%, avg=8449.60, stdev=5755.89, samples=5 00:44:00.812 iops : min= 28, max= 3842, avg=2112.40, stdev=1438.97, samples=5 00:44:00.812 lat (usec) : 250=40.68%, 500=52.75%, 750=5.95% 00:44:00.812 lat (msec) : 2=0.02%, 50=0.59% 00:44:00.812 cpu : usr=1.64%, sys=3.45%, ctx=5298, majf=0, minf=2 00:44:00.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:00.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 issued rwts: total=5297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:00.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:00.812 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2312439: Mon Dec 9 10:55:01 2024 00:44:00.812 read: IOPS=25, BW=100KiB/s (103kB/s)(272KiB/2716msec) 00:44:00.812 slat (nsec): min=14665, max=43784, avg=22090.62, stdev=6621.99 00:44:00.812 clat (usec): min=268, max=41148, avg=39779.06, stdev=6914.63 00:44:00.812 lat (usec): min=284, max=41163, avg=39801.08, stdev=6913.27 00:44:00.812 clat percentiles (usec): 00:44:00.812 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:44:00.812 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:00.812 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:00.812 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:44:00.812 | 99.99th=[41157] 00:44:00.812 bw ( KiB/s): min= 96, max= 120, per=0.46%, avg=100.80, stdev=10.73, samples=5 00:44:00.812 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:44:00.812 lat (usec) : 500=2.90% 00:44:00.812 lat (msec) : 50=95.65% 00:44:00.812 cpu : usr=0.11%, sys=0.00%, ctx=70, majf=0, minf=1 00:44:00.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:00.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:00.812 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:00.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:00.812 00:44:00.812 Run status group 0 (all jobs): 00:44:00.812 READ: bw=21.2MiB/s (22.2MB/s), 100KiB/s-8590KiB/s (103kB/s-8796kB/s), io=75.4MiB (79.1MB), run=2716-3559msec 00:44:00.812 00:44:00.812 Disk stats (read/write): 00:44:00.812 nvme0n1: ios=6300/0, merge=0/0, ticks=2924/0, in_queue=2924, util=93.87% 00:44:00.812 nvme0n2: ios=6833/0, merge=0/0, ticks=3119/0, in_queue=3119, util=94.76% 00:44:00.812 nvme0n3: ios=5292/0, merge=0/0, ticks=2657/0, in_queue=2657, util=96.17% 00:44:00.812 nvme0n4: ios=102/0, merge=0/0, ticks=2674/0, in_queue=2674, util=100.00% 00:44:01.072 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:01.072 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:44:01.332 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:01.332 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
00:44:01.591 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:01.591 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:44:01.850 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:01.850 10:55:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:02.110 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:02.110 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2312194 00:44:02.110 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:02.110 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:02.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:02.370 10:55:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:02.370 nvmf hotplug test: fio failed as expected 00:44:02.370 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:02.630 rmmod nvme_tcp 00:44:02.630 rmmod nvme_fabrics 00:44:02.630 rmmod nvme_keyring 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2309872 ']' 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2309872 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2309872 ']' 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2309872 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309872 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309872' 00:44:02.630 killing process with pid 2309872 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2309872 00:44:02.630 10:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2309872 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:02.890 10:55:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:05.431 00:44:05.431 real 0m29.506s 00:44:05.431 user 1m42.281s 00:44:05.431 sys 0m10.324s 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.431 ************************************ 00:44:05.431 END TEST nvmf_fio_target 00:44:05.431 ************************************ 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:05.431 10:55:06 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:44:05.431 ************************************ 00:44:05.431 START TEST nvmf_bdevio 00:44:05.431 ************************************ 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:44:05.431 * Looking for test storage... 00:44:05.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:44:05.431 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.432 --rc genhtml_branch_coverage=1 00:44:05.432 --rc genhtml_function_coverage=1 00:44:05.432 --rc genhtml_legend=1 00:44:05.432 --rc geninfo_all_blocks=1 00:44:05.432 --rc geninfo_unexecuted_blocks=1 00:44:05.432 00:44:05.432 ' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.432 --rc genhtml_branch_coverage=1 00:44:05.432 --rc genhtml_function_coverage=1 00:44:05.432 --rc genhtml_legend=1 00:44:05.432 --rc geninfo_all_blocks=1 00:44:05.432 --rc geninfo_unexecuted_blocks=1 00:44:05.432 00:44:05.432 ' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.432 --rc genhtml_branch_coverage=1 00:44:05.432 --rc genhtml_function_coverage=1 00:44:05.432 --rc genhtml_legend=1 00:44:05.432 --rc geninfo_all_blocks=1 00:44:05.432 --rc geninfo_unexecuted_blocks=1 00:44:05.432 00:44:05.432 ' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.432 --rc genhtml_branch_coverage=1 00:44:05.432 --rc genhtml_function_coverage=1 00:44:05.432 --rc genhtml_legend=1 00:44:05.432 --rc geninfo_all_blocks=1 00:44:05.432 --rc geninfo_unexecuted_blocks=1 00:44:05.432 00:44:05.432 ' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:05.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:44:05.432 10:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:12.015 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:12.015 10:55:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:12.015 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:12.015 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:12.016 Found net devices under 0000:af:00.0: cvl_0_0 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:12.016 Found net devices under 0000:af:00.1: cvl_0_1 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:12.016 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:12.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:12.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:44:12.276 00:44:12.276 --- 10.0.0.2 ping statistics --- 00:44:12.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:12.276 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:12.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:12.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:44:12.276 00:44:12.276 --- 10.0.0.1 ping statistics --- 00:44:12.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:12.276 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2316449 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2316449 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2316449 ']' 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:12.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:12.276 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.276 [2024-12-09 10:55:13.442485] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:44:12.276 [2024-12-09 10:55:13.442562] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:12.536 [2024-12-09 10:55:13.544989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:12.536 [2024-12-09 10:55:13.589841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:12.536 [2024-12-09 10:55:13.589886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:12.536 [2024-12-09 10:55:13.589897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:12.536 [2024-12-09 10:55:13.589907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:12.536 [2024-12-09 10:55:13.589915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:12.536 [2024-12-09 10:55:13.591542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:12.536 [2024-12-09 10:55:13.591671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:12.536 [2024-12-09 10:55:13.591741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:12.536 [2024-12-09 10:55:13.591743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:12.536 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:12.536 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:44:12.536 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:12.536 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:12.537 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.797 [2024-12-09 10:55:13.756785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.797 Malloc0 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.797 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:12.798 [2024-12-09 
10:55:13.821337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:12.798 { 00:44:12.798 "params": { 00:44:12.798 "name": "Nvme$subsystem", 00:44:12.798 "trtype": "$TEST_TRANSPORT", 00:44:12.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.798 "adrfam": "ipv4", 00:44:12.798 "trsvcid": "$NVMF_PORT", 00:44:12.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.798 "hdgst": ${hdgst:-false}, 00:44:12.798 "ddgst": ${ddgst:-false} 00:44:12.798 }, 00:44:12.798 "method": "bdev_nvme_attach_controller" 00:44:12.798 } 00:44:12.798 EOF 00:44:12.798 )") 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:44:12.798 10:55:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:12.798 "params": { 00:44:12.798 "name": "Nvme1", 00:44:12.798 "trtype": "tcp", 00:44:12.798 "traddr": "10.0.0.2", 00:44:12.798 "adrfam": "ipv4", 00:44:12.798 "trsvcid": "4420", 00:44:12.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:12.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:12.798 "hdgst": false, 00:44:12.798 "ddgst": false 00:44:12.798 }, 00:44:12.798 "method": "bdev_nvme_attach_controller" 00:44:12.798 }' 00:44:12.798 [2024-12-09 10:55:13.876828] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:44:12.798 [2024-12-09 10:55:13.876881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316480 ] 00:44:13.058 [2024-12-09 10:55:13.988251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:13.058 [2024-12-09 10:55:14.042846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:13.058 [2024-12-09 10:55:14.042933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:13.058 [2024-12-09 10:55:14.042938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:13.318 I/O targets: 00:44:13.318 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:13.318 00:44:13.318 00:44:13.318 CUnit - A unit testing framework for C - Version 2.1-3 00:44:13.318 http://cunit.sourceforge.net/ 00:44:13.318 00:44:13.318 00:44:13.318 Suite: bdevio tests on: Nvme1n1 00:44:13.318 Test: blockdev write read block ...passed 00:44:13.318 Test: blockdev write zeroes read block ...passed 00:44:13.318 Test: blockdev write zeroes read no split ...passed 00:44:13.318 Test: blockdev write zeroes read split 
...passed 00:44:13.318 Test: blockdev write zeroes read split partial ...passed 00:44:13.318 Test: blockdev reset ...[2024-12-09 10:55:14.364343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:44:13.318 [2024-12-09 10:55:14.364431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b5750 (9): Bad file descriptor 00:44:13.318 [2024-12-09 10:55:14.379069] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:44:13.318 passed 00:44:13.318 Test: blockdev write read 8 blocks ...passed 00:44:13.318 Test: blockdev write read size > 128k ...passed 00:44:13.318 Test: blockdev write read invalid size ...passed 00:44:13.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:13.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:13.318 Test: blockdev write read max offset ...passed 00:44:13.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:13.579 Test: blockdev writev readv 8 blocks ...passed 00:44:13.579 Test: blockdev writev readv 30 x 1block ...passed 00:44:13.579 Test: blockdev writev readv block ...passed 00:44:13.579 Test: blockdev writev readv size > 128k ...passed 00:44:13.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:13.579 Test: blockdev comparev and writev ...[2024-12-09 10:55:14.548275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.548308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.548326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 
10:55:14.548338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.548611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.548625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.548640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.548657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.548908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.548937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.548948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.549205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.549219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.549234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:44:13.579 [2024-12-09 10:55:14.549245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:13.579 passed 00:44:13.579 Test: blockdev nvme passthru rw ...passed 00:44:13.579 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:55:14.631028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.579 [2024-12-09 10:55:14.631048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.631168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.579 [2024-12-09 10:55:14.631182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.631297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.579 [2024-12-09 10:55:14.631310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:13.579 [2024-12-09 10:55:14.631426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.579 [2024-12-09 10:55:14.631440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:13.579 passed 00:44:13.579 Test: blockdev nvme admin passthru ...passed 00:44:13.579 Test: blockdev copy ...passed 00:44:13.579 00:44:13.579 Run Summary: Type Total Ran Passed Failed Inactive 00:44:13.579 suites 1 1 n/a 0 0 00:44:13.579 tests 23 23 23 0 0 00:44:13.579 asserts 152 152 152 0 n/a 00:44:13.579 00:44:13.579 Elapsed time = 0.872 seconds 
00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:13.840 rmmod nvme_tcp 00:44:13.840 rmmod nvme_fabrics 00:44:13.840 rmmod nvme_keyring 00:44:13.840 10:55:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2316449 ']' 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2316449 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2316449 ']' 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2316449 00:44:13.840 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316449 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316449' 00:44:14.100 killing process with pid 2316449 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2316449 00:44:14.100 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2316449 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:14.360 10:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:16.274 00:44:16.274 real 0m11.227s 00:44:16.274 user 0m11.059s 00:44:16.274 sys 0m5.673s 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:16.274 ************************************ 00:44:16.274 END TEST nvmf_bdevio 00:44:16.274 ************************************ 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:16.274 00:44:16.274 real 5m0.535s 00:44:16.274 user 11m5.838s 00:44:16.274 sys 1m55.706s 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.274 10:55:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:44:16.274 ************************************ 00:44:16.274 END TEST nvmf_target_core 00:44:16.274 ************************************ 00:44:16.534 10:55:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:44:16.534 10:55:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:16.534 10:55:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:16.534 10:55:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:44:16.534 ************************************ 00:44:16.534 START TEST nvmf_target_extra 00:44:16.534 ************************************ 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:44:16.534 * Looking for test storage... 00:44:16.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.534 --rc genhtml_branch_coverage=1 00:44:16.534 --rc genhtml_function_coverage=1 00:44:16.534 --rc genhtml_legend=1 00:44:16.534 --rc geninfo_all_blocks=1 
00:44:16.534 --rc geninfo_unexecuted_blocks=1 00:44:16.534 00:44:16.534 ' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.534 --rc genhtml_branch_coverage=1 00:44:16.534 --rc genhtml_function_coverage=1 00:44:16.534 --rc genhtml_legend=1 00:44:16.534 --rc geninfo_all_blocks=1 00:44:16.534 --rc geninfo_unexecuted_blocks=1 00:44:16.534 00:44:16.534 ' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.534 --rc genhtml_branch_coverage=1 00:44:16.534 --rc genhtml_function_coverage=1 00:44:16.534 --rc genhtml_legend=1 00:44:16.534 --rc geninfo_all_blocks=1 00:44:16.534 --rc geninfo_unexecuted_blocks=1 00:44:16.534 00:44:16.534 ' 00:44:16.534 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:16.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.535 --rc genhtml_branch_coverage=1 00:44:16.535 --rc genhtml_function_coverage=1 00:44:16.535 --rc genhtml_legend=1 00:44:16.535 --rc geninfo_all_blocks=1 00:44:16.535 --rc geninfo_unexecuted_blocks=1 00:44:16.535 00:44:16.535 ' 00:44:16.535 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:16.535 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:16.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:44:16.795 ************************************ 00:44:16.795 START TEST nvmf_example 00:44:16.795 ************************************ 00:44:16.795 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:44:16.796 * Looking for test storage... 00:44:16.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.796 
10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.796 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:17.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.056 --rc genhtml_branch_coverage=1 00:44:17.056 --rc genhtml_function_coverage=1 00:44:17.056 --rc genhtml_legend=1 00:44:17.056 --rc geninfo_all_blocks=1 00:44:17.056 --rc geninfo_unexecuted_blocks=1 00:44:17.056 00:44:17.056 ' 00:44:17.056 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:17.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.056 --rc genhtml_branch_coverage=1 00:44:17.057 --rc genhtml_function_coverage=1 00:44:17.057 --rc genhtml_legend=1 00:44:17.057 --rc geninfo_all_blocks=1 00:44:17.057 --rc geninfo_unexecuted_blocks=1 00:44:17.057 00:44:17.057 ' 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.057 --rc genhtml_branch_coverage=1 00:44:17.057 --rc genhtml_function_coverage=1 00:44:17.057 --rc genhtml_legend=1 00:44:17.057 --rc geninfo_all_blocks=1 00:44:17.057 --rc geninfo_unexecuted_blocks=1 00:44:17.057 00:44:17.057 ' 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.057 --rc 
genhtml_branch_coverage=1 00:44:17.057 --rc genhtml_function_coverage=1 00:44:17.057 --rc genhtml_legend=1 00:44:17.057 --rc geninfo_all_blocks=1 00:44:17.057 --rc geninfo_unexecuted_blocks=1 00:44:17.057 00:44:17.057 ' 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:17.057 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:17.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:44:17.057 10:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:17.057 
10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:44:17.057 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:23.642 10:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:23.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:23.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:23.642 Found net devices under 0000:af:00.0: cvl_0_0 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:23.642 10:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:23.642 Found net devices under 0000:af:00.1: cvl_0_1 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:23.642 
10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:23.642 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:23.643 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:23.643 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:23.643 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:23.643 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:23.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:23.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:44:23.903 00:44:23.903 --- 10.0.0.2 ping statistics --- 00:44:23.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.903 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:23.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:23.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:44:23.903 00:44:23.903 --- 10.0.0.1 ping statistics --- 00:44:23.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.903 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:23.903 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:23.903 10:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2319953 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2319953 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2319953 ']' 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:23.903 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:23.904 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:44:23.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:23.904 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:23.904 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:44:25.284 
10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:25.284 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:37.504 Initializing NVMe Controllers 00:44:37.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:37.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:44:37.504 Initialization complete. Launching workers. 00:44:37.504 ======================================================== 00:44:37.504 Latency(us) 00:44:37.504 Device Information : IOPS MiB/s Average min max 00:44:37.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17563.85 68.61 3642.91 983.00 15321.22 00:44:37.504 ======================================================== 00:44:37.504 Total : 17563.85 68.61 3642.91 983.00 15321.22 00:44:37.504 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:37.504 rmmod nvme_tcp 00:44:37.504 rmmod nvme_fabrics 00:44:37.504 rmmod nvme_keyring 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2319953 ']' 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2319953 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2319953 ']' 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2319953 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2319953 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2319953' 00:44:37.504 killing process with pid 2319953 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2319953 00:44:37.504 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2319953 00:44:37.504 nvmf threads initialize successfully 00:44:37.504 bdev subsystem init successfully 00:44:37.504 created a nvmf target service 00:44:37.504 create targets's poll groups done 00:44:37.504 all subsystems of target started 00:44:37.504 nvmf target is running 00:44:37.504 all subsystems of target stopped 00:44:37.504 destroy targets's poll groups done 00:44:37.504 destroyed the nvmf target service 00:44:37.504 bdev subsystem 
finish successfully 00:44:37.504 nvmf threads destroy successfully 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:37.504 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.073 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:38.073 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:44:38.073 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:38.073 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:38.073 00:44:38.073 real 0m21.456s 00:44:38.073 user 0m48.142s 00:44:38.073 sys 0m7.109s 00:44:38.073 
10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.073 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:44:38.073 ************************************ 00:44:38.073 END TEST nvmf_example 00:44:38.073 ************************************ 00:44:38.333 10:55:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:44:38.333 10:55:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:38.333 10:55:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.333 10:55:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:44:38.333 ************************************ 00:44:38.333 START TEST nvmf_filesystem 00:44:38.333 ************************************ 00:44:38.334 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:44:38.334 * Looking for test storage... 
00:44:38.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.334 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:38.334 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:44:38.334 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:44:38.598 
10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:38.598 --rc lcov_branch_coverage=1 --rc 
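The trace above steps through the `lt 1.15 2` check from SPDK's `scripts/common.sh`: each version string is split on `.` and `-` into an array (`IFS=.-` with `read -ra`), then components are compared numerically left to right. A minimal standalone sketch of that component-wise comparison (not the actual `scripts/common.sh` source, just the logic visible in the trace; missing components are assumed to compare as 0):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare traced above.
# lt A B: exit 0 (true) if version A < version B, else exit 1.
lt() {
    local IFS=.-          # split version strings on '.' and '-'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i
    for (( i = 0; i < n; i++ )); do
        # missing components default to 0 (assumption; e.g. "2" vs "2.0")
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1              # equal versions: not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

In the log, this check gates which `lcov`/`genhtml` branch-coverage flags get exported: because the detected lcov version is not older than 1.15's cutoff behavior, the script takes the `return 0` path at `scripts/common.sh@368` and sets `LCOV_OPTS` with the `--rc lcov_branch_coverage=1` family of options.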
lcov_function_coverage=1 00:44:38.598 --rc genhtml_branch_coverage=1 00:44:38.598 --rc genhtml_function_coverage=1 00:44:38.598 --rc genhtml_legend=1 00:44:38.598 --rc geninfo_all_blocks=1 00:44:38.598 --rc geninfo_unexecuted_blocks=1 00:44:38.598 00:44:38.598 ' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.598 --rc genhtml_branch_coverage=1 00:44:38.598 --rc genhtml_function_coverage=1 00:44:38.598 --rc genhtml_legend=1 00:44:38.598 --rc geninfo_all_blocks=1 00:44:38.598 --rc geninfo_unexecuted_blocks=1 00:44:38.598 00:44:38.598 ' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.598 --rc genhtml_branch_coverage=1 00:44:38.598 --rc genhtml_function_coverage=1 00:44:38.598 --rc genhtml_legend=1 00:44:38.598 --rc geninfo_all_blocks=1 00:44:38.598 --rc geninfo_unexecuted_blocks=1 00:44:38.598 00:44:38.598 ' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.598 --rc genhtml_branch_coverage=1 00:44:38.598 --rc genhtml_function_coverage=1 00:44:38.598 --rc genhtml_legend=1 00:44:38.598 --rc geninfo_all_blocks=1 00:44:38.598 --rc geninfo_unexecuted_blocks=1 00:44:38.598 00:44:38.598 ' 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:44:38.598 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:44:38.598 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:44:38.599 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:44:38.599 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:44:38.599 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:44:38.599 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:44:38.599 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:44:38.600 
10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:44:38.600 #define SPDK_CONFIG_H 00:44:38.600 #define SPDK_CONFIG_AIO_FSDEV 1 00:44:38.600 #define SPDK_CONFIG_APPS 1 00:44:38.600 #define SPDK_CONFIG_ARCH native 00:44:38.600 #undef SPDK_CONFIG_ASAN 00:44:38.600 #undef SPDK_CONFIG_AVAHI 00:44:38.600 #undef SPDK_CONFIG_CET 00:44:38.600 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:44:38.600 #define SPDK_CONFIG_COVERAGE 1 00:44:38.600 #define SPDK_CONFIG_CROSS_PREFIX 00:44:38.600 #undef SPDK_CONFIG_CRYPTO 00:44:38.600 #undef SPDK_CONFIG_CRYPTO_MLX5 00:44:38.600 #undef SPDK_CONFIG_CUSTOMOCF 00:44:38.600 #undef SPDK_CONFIG_DAOS 00:44:38.600 #define SPDK_CONFIG_DAOS_DIR 00:44:38.600 #define SPDK_CONFIG_DEBUG 1 00:44:38.600 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:44:38.600 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:44:38.600 #define SPDK_CONFIG_DPDK_INC_DIR 00:44:38.600 #define SPDK_CONFIG_DPDK_LIB_DIR 00:44:38.600 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:44:38.600 #undef SPDK_CONFIG_DPDK_UADK 00:44:38.600 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:44:38.600 #define SPDK_CONFIG_EXAMPLES 1 00:44:38.600 #undef SPDK_CONFIG_FC 00:44:38.600 #define SPDK_CONFIG_FC_PATH 00:44:38.600 #define SPDK_CONFIG_FIO_PLUGIN 1 00:44:38.600 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:44:38.600 #define SPDK_CONFIG_FSDEV 1 00:44:38.600 #undef SPDK_CONFIG_FUSE 00:44:38.600 #undef SPDK_CONFIG_FUZZER 00:44:38.600 #define SPDK_CONFIG_FUZZER_LIB 00:44:38.600 #undef SPDK_CONFIG_GOLANG 00:44:38.600 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:44:38.600 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:44:38.600 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:44:38.600 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:44:38.600 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:44:38.600 #undef SPDK_CONFIG_HAVE_LIBBSD 00:44:38.600 #undef SPDK_CONFIG_HAVE_LZ4 00:44:38.600 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:44:38.600 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:44:38.600 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:44:38.600 #define SPDK_CONFIG_IDXD 1 00:44:38.600 #define SPDK_CONFIG_IDXD_KERNEL 1 00:44:38.600 #undef SPDK_CONFIG_IPSEC_MB 00:44:38.600 #define SPDK_CONFIG_IPSEC_MB_DIR 00:44:38.600 #define SPDK_CONFIG_ISAL 1 00:44:38.600 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:44:38.600 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:44:38.600 #define SPDK_CONFIG_LIBDIR 00:44:38.600 #undef SPDK_CONFIG_LTO 00:44:38.600 #define SPDK_CONFIG_MAX_LCORES 128 00:44:38.600 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:44:38.600 #define SPDK_CONFIG_NVME_CUSE 1 00:44:38.600 #undef SPDK_CONFIG_OCF 00:44:38.600 #define SPDK_CONFIG_OCF_PATH 00:44:38.600 #define SPDK_CONFIG_OPENSSL_PATH 00:44:38.600 #undef SPDK_CONFIG_PGO_CAPTURE 00:44:38.600 #define SPDK_CONFIG_PGO_DIR 00:44:38.600 #undef SPDK_CONFIG_PGO_USE 00:44:38.600 #define SPDK_CONFIG_PREFIX /usr/local 00:44:38.600 #undef SPDK_CONFIG_RAID5F 00:44:38.600 #undef SPDK_CONFIG_RBD 00:44:38.600 #define SPDK_CONFIG_RDMA 1 00:44:38.600 #define SPDK_CONFIG_RDMA_PROV verbs 00:44:38.600 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:44:38.600 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:44:38.600 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:44:38.600 #define SPDK_CONFIG_SHARED 1 00:44:38.600 #undef SPDK_CONFIG_SMA 00:44:38.600 #define SPDK_CONFIG_TESTS 1 00:44:38.600 #undef SPDK_CONFIG_TSAN 00:44:38.600 #define SPDK_CONFIG_UBLK 1 00:44:38.600 #define SPDK_CONFIG_UBSAN 1 00:44:38.600 #undef SPDK_CONFIG_UNIT_TESTS 00:44:38.600 #undef SPDK_CONFIG_URING 00:44:38.600 #define SPDK_CONFIG_URING_PATH 00:44:38.600 #undef SPDK_CONFIG_URING_ZNS 00:44:38.600 #undef SPDK_CONFIG_USDT 00:44:38.600 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:44:38.600 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:44:38.600 #define SPDK_CONFIG_VFIO_USER 1 00:44:38.600 #define SPDK_CONFIG_VFIO_USER_DIR 00:44:38.600 #define SPDK_CONFIG_VHOST 1 00:44:38.600 #define SPDK_CONFIG_VIRTIO 1 00:44:38.600 #undef SPDK_CONFIG_VTUNE 00:44:38.600 #define SPDK_CONFIG_VTUNE_DIR 00:44:38.600 #define SPDK_CONFIG_WERROR 1 00:44:38.600 #define SPDK_CONFIG_WPDK_DIR 00:44:38.600 #undef SPDK_CONFIG_XNVME 00:44:38.600 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:44:38.600 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:44:38.601 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:44:38.601 
10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:44:38.601 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:44:38.601 
10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:44:38.601 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:44:38.601 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:44:38.602 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2322003 ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2322003 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.80WX7b 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.80WX7b/tests/target /tmp/spdk.80WX7b 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50640351232 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61738127360 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11097776128 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30857695232 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30869061632 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12324798464 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12347625472 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22827008 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30868680704 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30869065728 00:44:38.603 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=385024 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173798400 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173810688 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:44:38.603 * Looking for test storage... 
00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50640351232 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13312368640 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.603 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:44:38.603 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:44:38.603 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:38.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.864 --rc genhtml_branch_coverage=1 00:44:38.864 --rc genhtml_function_coverage=1 00:44:38.864 --rc genhtml_legend=1 00:44:38.864 --rc geninfo_all_blocks=1 00:44:38.864 --rc geninfo_unexecuted_blocks=1 00:44:38.864 00:44:38.864 ' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:38.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.864 --rc genhtml_branch_coverage=1 00:44:38.864 --rc genhtml_function_coverage=1 00:44:38.864 --rc genhtml_legend=1 00:44:38.864 --rc geninfo_all_blocks=1 00:44:38.864 --rc geninfo_unexecuted_blocks=1 00:44:38.864 00:44:38.864 ' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:38.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.864 --rc genhtml_branch_coverage=1 00:44:38.864 --rc genhtml_function_coverage=1 00:44:38.864 --rc genhtml_legend=1 00:44:38.864 --rc geninfo_all_blocks=1 00:44:38.864 --rc geninfo_unexecuted_blocks=1 00:44:38.864 00:44:38.864 ' 00:44:38.864 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:38.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.864 --rc genhtml_branch_coverage=1 00:44:38.864 --rc genhtml_function_coverage=1 00:44:38.865 --rc genhtml_legend=1 00:44:38.865 --rc geninfo_all_blocks=1 00:44:38.865 --rc geninfo_unexecuted_blocks=1 00:44:38.865 00:44:38.865 ' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.865 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:38.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:44:38.865 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:45.453 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:45.453 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:45.454 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:45.454 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.454 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:45.454 Found net devices under 0000:af:00.0: cvl_0_0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:45.454 Found net devices under 0000:af:00.1: cvl_0_1 00:44:45.454 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:45.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:45.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:44:45.454 00:44:45.454 --- 10.0.0.2 ping statistics --- 00:44:45.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.454 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:45.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:45.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:44:45.454 00:44:45.454 --- 10.0.0.1 ping statistics --- 00:44:45.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.454 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:44:45.454 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:44:45.454 ************************************ 00:44:45.454 START TEST nvmf_filesystem_no_in_capsule 00:44:45.454 ************************************ 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2324849 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2324849 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2324849 ']' 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:45.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:45.454 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.455 [2024-12-09 10:55:46.437338] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:44:45.455 [2024-12-09 10:55:46.437411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:45.455 [2024-12-09 10:55:46.569932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:45.455 [2024-12-09 10:55:46.623398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:45.455 [2024-12-09 10:55:46.623446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
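Before the target app is started, the `nvmf/common.sh` xtrace earlier in this log (lines @267 through @291) builds the test network: one NIC port is moved into a network namespace to act as the target side, both ends are addressed, the NVMe/TCP port is opened, and reachability is verified in both directions. Consolidated into a plain sketch (commands taken verbatim from the xtrace above; requires root, and `cvl_0_0`/`cvl_0_1` are the CVL NIC ports present on this particular test node):

```shell
# Consolidated from the nvmf/common.sh xtrace: place one port in a network
# namespace, address both ends, open TCP/4420, and verify reachability.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # default ns -> netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # netns -> default ns
```

With this topology in place, `nvmf_tgt` is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why the host-side `nvme connect` later in the log reaches it at 10.0.0.2:4420.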
00:44:45.455 [2024-12-09 10:55:46.623461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:45.455 [2024-12-09 10:55:46.623475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:45.455 [2024-12-09 10:55:46.623487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:45.455 [2024-12-09 10:55:46.625316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:45.455 [2024-12-09 10:55:46.625415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:45.455 [2024-12-09 10:55:46.625507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:45.455 [2024-12-09 10:55:46.625512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.716 [2024-12-09 10:55:46.784220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.716 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.977 Malloc1 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.977 [2024-12-09 10:55:46.938553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:44:45.977 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:45.977 { 00:44:45.977 "name": "Malloc1", 00:44:45.977 "aliases": [ 00:44:45.977 "96c899fb-5be2-42d0-af3c-3f377d3cfc9e" 00:44:45.977 ], 00:44:45.977 "product_name": "Malloc disk", 00:44:45.977 "block_size": 512, 00:44:45.977 "num_blocks": 1048576, 00:44:45.977 "uuid": "96c899fb-5be2-42d0-af3c-3f377d3cfc9e", 00:44:45.977 "assigned_rate_limits": { 00:44:45.977 "rw_ios_per_sec": 0, 00:44:45.977 "rw_mbytes_per_sec": 0, 00:44:45.977 "r_mbytes_per_sec": 0, 00:44:45.977 "w_mbytes_per_sec": 0 00:44:45.977 }, 00:44:45.977 "claimed": true, 00:44:45.977 "claim_type": "exclusive_write", 00:44:45.977 "zoned": false, 00:44:45.977 "supported_io_types": { 00:44:45.977 "read": true, 00:44:45.977 "write": true, 00:44:45.977 "unmap": true, 00:44:45.977 "flush": true, 00:44:45.977 "reset": true, 00:44:45.977 "nvme_admin": false, 00:44:45.977 "nvme_io": false, 00:44:45.977 "nvme_io_md": false, 00:44:45.977 "write_zeroes": true, 00:44:45.977 "zcopy": true, 00:44:45.977 "get_zone_info": false, 00:44:45.977 "zone_management": false, 00:44:45.977 "zone_append": false, 00:44:45.977 "compare": false, 00:44:45.977 "compare_and_write": 
false, 00:44:45.977 "abort": true, 00:44:45.977 "seek_hole": false, 00:44:45.977 "seek_data": false, 00:44:45.977 "copy": true, 00:44:45.977 "nvme_iov_md": false 00:44:45.977 }, 00:44:45.977 "memory_domains": [ 00:44:45.977 { 00:44:45.977 "dma_device_id": "system", 00:44:45.977 "dma_device_type": 1 00:44:45.977 }, 00:44:45.977 { 00:44:45.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:45.977 "dma_device_type": 2 00:44:45.977 } 00:44:45.977 ], 00:44:45.977 "driver_specific": {} 00:44:45.977 } 00:44:45.977 ]' 00:44:45.977 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:44:45.977 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:46.917 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:44:46.917 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:44:46.917 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:46.917 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:44:46.917 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:44:48.827 10:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:44:48.827 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:44:49.086 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:44:49.086 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:44:50.468 10:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:50.468 ************************************ 00:44:50.468 START TEST filesystem_ext4 00:44:50.468 ************************************ 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:44:50.468 10:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:44:50.468 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:44:50.468 mke2fs 1.47.0 (5-Feb-2023) 00:44:50.468 Discarding device blocks: 0/522240 done 00:44:50.468 Creating filesystem with 522240 1k blocks and 130560 inodes 00:44:50.468 Filesystem UUID: 0531b90f-050c-4990-ab36-90b31af7b967 00:44:50.468 Superblock backups stored on blocks: 00:44:50.468 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:44:50.468 00:44:50.468 Allocating group tables: 0/64 done 00:44:50.468 Writing inode tables: 0/64 done 00:44:53.009 Creating journal (8192 blocks): done 00:44:53.009 Writing superblocks and filesystem accounting information: 0/64 done 00:44:53.009 00:44:53.009 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:44:53.009 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:44:58.290 10:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2324849 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:44:58.290 00:44:58.290 real 0m7.981s 00:44:58.290 user 0m0.046s 00:44:58.290 sys 0m0.078s 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:44:58.290 ************************************ 00:44:58.290 END TEST filesystem_ext4 00:44:58.290 ************************************ 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:44:58.290 
10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:58.290 ************************************ 00:44:58.290 START TEST filesystem_btrfs 00:44:58.290 ************************************ 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:44:58.290 10:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:44:58.290 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:44:58.861 btrfs-progs v6.8.1 00:44:58.861 See https://btrfs.readthedocs.io for more information. 00:44:58.861 00:44:58.861 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:44:58.861 NOTE: several default settings have changed in version 5.15, please make sure 00:44:58.861 this does not affect your deployments: 00:44:58.861 - DUP for metadata (-m dup) 00:44:58.861 - enabled no-holes (-O no-holes) 00:44:58.861 - enabled free-space-tree (-R free-space-tree) 00:44:58.861 00:44:58.861 Label: (null) 00:44:58.861 UUID: 3241a701-1081-41ca-bc6d-7b84acd03bc6 00:44:58.861 Node size: 16384 00:44:58.861 Sector size: 4096 (CPU page size: 4096) 00:44:58.861 Filesystem size: 510.00MiB 00:44:58.861 Block group profiles: 00:44:58.861 Data: single 8.00MiB 00:44:58.861 Metadata: DUP 32.00MiB 00:44:58.861 System: DUP 8.00MiB 00:44:58.861 SSD detected: yes 00:44:58.861 Zoned device: no 00:44:58.861 Features: extref, skinny-metadata, no-holes, free-space-tree 00:44:58.861 Checksum: crc32c 00:44:58.861 Number of devices: 1 00:44:58.861 Devices: 00:44:58.861 ID SIZE PATH 00:44:58.861 1 510.00MiB /dev/nvme0n1p1 00:44:58.861 00:44:58.861 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:44:58.861 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:44:59.120 10:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:44:59.120 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:44:59.120 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2324849 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:44:59.381 00:44:59.381 real 0m1.009s 00:44:59.381 user 0m0.043s 00:44:59.381 sys 0m0.186s 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:59.381 
10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:44:59.381 ************************************ 00:44:59.381 END TEST filesystem_btrfs 00:44:59.381 ************************************ 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:44:59.381 ************************************ 00:44:59.381 START TEST filesystem_xfs 00:44:59.381 ************************************ 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:44:59.381 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:44:59.641 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:44:59.641 = sectsz=512 attr=2, projid32bit=1 00:44:59.641 = crc=1 finobt=1, sparse=1, rmapbt=0 00:44:59.641 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:44:59.641 data = bsize=4096 blocks=130560, imaxpct=25 00:44:59.641 = sunit=0 swidth=0 blks 00:44:59.641 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:44:59.641 log =internal log bsize=4096 blocks=16384, version=2 00:44:59.641 = sectsz=512 sunit=0 blks, lazy-count=1 00:44:59.641 realtime =none extsz=4096 blocks=0, rtextents=0 00:45:00.581 Discarding blocks...Done. 
00:45:00.581 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:45:00.581 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2324849 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:45:03.123 10:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:45:03.123 00:45:03.123 real 0m3.781s 00:45:03.123 user 0m0.032s 00:45:03.123 sys 0m0.133s 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:45:03.123 ************************************ 00:45:03.123 END TEST filesystem_xfs 00:45:03.123 ************************************ 00:45:03.123 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:45:03.382 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:45:03.382 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:03.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2324849 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2324849 ']' 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2324849 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324849 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324849' 00:45:03.643 killing process with pid 2324849 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2324849 00:45:03.643 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2324849 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:45:04.213 00:45:04.213 real 0m18.855s 00:45:04.213 user 1m13.154s 00:45:04.213 sys 0m2.493s 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.213 ************************************ 00:45:04.213 END TEST nvmf_filesystem_no_in_capsule 00:45:04.213 ************************************ 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:04.213 10:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:45:04.213 ************************************ 00:45:04.213 START TEST nvmf_filesystem_in_capsule 00:45:04.213 ************************************ 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2327440 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2327440 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2327440 ']' 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:04.213 10:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:04.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:04.213 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.213 [2024-12-09 10:56:05.369671] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:45:04.213 [2024-12-09 10:56:05.369725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:04.473 [2024-12-09 10:56:05.485391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:04.473 [2024-12-09 10:56:05.536413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:04.473 [2024-12-09 10:56:05.536468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:04.473 [2024-12-09 10:56:05.536485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:04.473 [2024-12-09 10:56:05.536499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:04.473 [2024-12-09 10:56:05.536511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
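[Editor's note] The startup sequence above launches `nvmf_tgt`, records `nvmfpid`, then blocks in `waitforlisten` until the target is listening on `/var/tmp/spdk.sock`. A hedged sketch of that polling idea follows; `wait_for_path` and its bounded retry count are illustrative names, and the real helper additionally probes the socket via rpc.py rather than only checking path existence:

```shell
# Sketch only: poll until a path (e.g. the RPC UNIX socket) appears,
# giving up after max_retries attempts. Not the actual waitforlisten.
wait_for_path() {
  local path=$1 max_retries=${2:-100}
  local i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    # Bail out rather than hang forever if the target never comes up.
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.1
  done
  return 0
}

# e.g. wait_for_path /var/tmp/spdk.sock 100 || echo 'target did not start'
```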
00:45:04.473 [2024-12-09 10:56:05.538245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:04.473 [2024-12-09 10:56:05.538331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:04.473 [2024-12-09 10:56:05.538425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:04.473 [2024-12-09 10:56:05.538430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:04.473 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:04.473 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:45:04.473 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:04.473 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:04.473 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 [2024-12-09 10:56:05.697158] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 10:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.733 [2024-12-09 10:56:05.871870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:04.733 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:45:04.734 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:45:04.734 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:45:04.734 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.734 10:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:04.734 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:04.994 { 00:45:04.994 "name": "Malloc1", 00:45:04.994 "aliases": [ 00:45:04.994 "93df6602-2b19-4b33-b55b-38a92010d66b" 00:45:04.994 ], 00:45:04.994 "product_name": "Malloc disk", 00:45:04.994 "block_size": 512, 00:45:04.994 "num_blocks": 1048576, 00:45:04.994 "uuid": "93df6602-2b19-4b33-b55b-38a92010d66b", 00:45:04.994 "assigned_rate_limits": { 00:45:04.994 "rw_ios_per_sec": 0, 00:45:04.994 "rw_mbytes_per_sec": 0, 00:45:04.994 "r_mbytes_per_sec": 0, 00:45:04.994 "w_mbytes_per_sec": 0 00:45:04.994 }, 00:45:04.994 "claimed": true, 00:45:04.994 "claim_type": "exclusive_write", 00:45:04.994 "zoned": false, 00:45:04.994 "supported_io_types": { 00:45:04.994 "read": true, 00:45:04.994 "write": true, 00:45:04.994 "unmap": true, 00:45:04.994 "flush": true, 00:45:04.994 "reset": true, 00:45:04.994 "nvme_admin": false, 00:45:04.994 "nvme_io": false, 00:45:04.994 "nvme_io_md": false, 00:45:04.994 "write_zeroes": true, 00:45:04.994 "zcopy": true, 00:45:04.994 "get_zone_info": false, 00:45:04.994 "zone_management": false, 00:45:04.994 "zone_append": false, 00:45:04.994 "compare": false, 00:45:04.994 "compare_and_write": false, 00:45:04.994 "abort": true, 00:45:04.994 "seek_hole": false, 00:45:04.994 "seek_data": false, 00:45:04.994 "copy": true, 00:45:04.994 "nvme_iov_md": false 00:45:04.994 }, 00:45:04.994 "memory_domains": [ 00:45:04.994 { 00:45:04.994 "dma_device_id": "system", 00:45:04.994 "dma_device_type": 1 00:45:04.994 }, 00:45:04.994 { 00:45:04.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:04.994 "dma_device_type": 2 00:45:04.994 } 00:45:04.994 ], 00:45:04.994 
"driver_specific": {} 00:45:04.994 } 00:45:04.994 ]' 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:45:04.994 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:05.934 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:45:05.934 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:45:05.934 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:45:05.934 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:45:05.934 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:07.847 10:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:45:07.847 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:45:08.107 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:45:08.676 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:09.617 ************************************ 00:45:09.617 START TEST filesystem_in_capsule_ext4 00:45:09.617 ************************************ 00:45:09.617 10:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:45:09.617 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:45:09.617 mke2fs 1.47.0 (5-Feb-2023) 00:45:09.877 Discarding device blocks: 
0/522240 done 00:45:09.877 Creating filesystem with 522240 1k blocks and 130560 inodes 00:45:09.877 Filesystem UUID: 04aad362-cea7-4220-b3e9-725e1913343e 00:45:09.877 Superblock backups stored on blocks: 00:45:09.877 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:45:09.877 00:45:09.877 Allocating group tables: 0/64 done 00:45:09.877 Writing inode tables: 0/64 done 00:45:09.877 Creating journal (8192 blocks): done 00:45:11.828 Writing superblocks and filesystem accounting information: 0/64 done 00:45:11.828 00:45:11.828 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:45:11.828 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:45:17.108 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:45:17.108 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:45:17.108 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2327440 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:45:17.368 00:45:17.368 real 0m7.573s 00:45:17.368 user 0m0.031s 00:45:17.368 sys 0m0.094s 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:45:17.368 ************************************ 00:45:17.368 END TEST filesystem_in_capsule_ext4 00:45:17.368 ************************************ 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:17.368 ************************************ 00:45:17.368 START 
TEST filesystem_in_capsule_btrfs 00:45:17.368 ************************************ 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:45:17.368 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:45:17.627 btrfs-progs v6.8.1 00:45:17.627 See https://btrfs.readthedocs.io for more information. 00:45:17.627 00:45:17.627 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:45:17.628 NOTE: several default settings have changed in version 5.15, please make sure 00:45:17.628 this does not affect your deployments: 00:45:17.628 - DUP for metadata (-m dup) 00:45:17.628 - enabled no-holes (-O no-holes) 00:45:17.628 - enabled free-space-tree (-R free-space-tree) 00:45:17.628 00:45:17.628 Label: (null) 00:45:17.628 UUID: b6fe1ecc-6c27-47d7-b8e5-239dcf519376 00:45:17.628 Node size: 16384 00:45:17.628 Sector size: 4096 (CPU page size: 4096) 00:45:17.628 Filesystem size: 510.00MiB 00:45:17.628 Block group profiles: 00:45:17.628 Data: single 8.00MiB 00:45:17.628 Metadata: DUP 32.00MiB 00:45:17.628 System: DUP 8.00MiB 00:45:17.628 SSD detected: yes 00:45:17.628 Zoned device: no 00:45:17.628 Features: extref, skinny-metadata, no-holes, free-space-tree 00:45:17.628 Checksum: crc32c 00:45:17.628 Number of devices: 1 00:45:17.628 Devices: 00:45:17.628 ID SIZE PATH 00:45:17.628 1 510.00MiB /dev/nvme0n1p1 00:45:17.628 00:45:17.628 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:45:17.628 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2327440 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:45:18.565 00:45:18.565 real 0m1.232s 00:45:18.565 user 0m0.042s 00:45:18.565 sys 0m0.128s 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:45:18.565 ************************************ 00:45:18.565 END TEST filesystem_in_capsule_btrfs 00:45:18.565 ************************************ 00:45:18.565 10:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:18.565 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:18.825 ************************************ 00:45:18.825 START TEST filesystem_in_capsule_xfs 00:45:18.825 ************************************ 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:45:18.825 
10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:45:18.825 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:45:18.825 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:45:18.825 = sectsz=512 attr=2, projid32bit=1 00:45:18.825 = crc=1 finobt=1, sparse=1, rmapbt=0 00:45:18.825 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:45:18.825 data = bsize=4096 blocks=130560, imaxpct=25 00:45:18.825 = sunit=0 swidth=0 blks 00:45:18.825 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:45:18.825 log =internal log bsize=4096 blocks=16384, version=2 00:45:18.825 = sectsz=512 sunit=0 blks, lazy-count=1 00:45:18.825 realtime =none extsz=4096 blocks=0, rtextents=0 00:45:19.763 Discarding blocks...Done. 
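The xtrace lines above (autotest_common.sh@930-949) show the harness's make_filesystem helper: it records the fstype and device, picks a force flag (ext4 confirms with -F, btrfs/xfs use -f), runs mkfs, and returns 0 on success; the `local i=0` suggests a bounded retry loop. A minimal sketch of that pattern — the retry count and sleep are assumptions, only the variable names come from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the make_filesystem pattern seen in the xtrace above.
# Retry count (3) and the 1s back-off are illustrative assumptions;
# fstype/dev_name/i/force mirror the locals in the trace.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # ext4's mkfs prompts unless forced with -F; btrfs and xfs take -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    while [ "$i" -lt 3 ]; do
        # e.g. mkfs.btrfs -f /dev/nvme0n1p1, as in the trace
        "mkfs.$fstype" $force "$dev_name" && return 0
        i=$((i + 1))
        sleep 1   # device node may be briefly busy after partitioning
    done
    return 1
}
```

Retrying matters here because the nvme0n1p1 node is created by udev just before the test runs and can transiently report busy.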
00:45:19.763 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:45:19.763 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:45:22.299 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:45:22.299 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:45:22.299 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:45:22.299 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2327440 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:45:22.300 00:45:22.300 real 0m3.599s 00:45:22.300 user 0m0.033s 00:45:22.300 sys 0m0.087s 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:45:22.300 ************************************ 00:45:22.300 END TEST filesystem_in_capsule_xfs 00:45:22.300 ************************************ 00:45:22.300 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:22.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:22.560 10:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2327440 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2327440 ']' 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2327440 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:22.560 10:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327440 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327440' 00:45:22.560 killing process with pid 2327440 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2327440 00:45:22.560 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2327440 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:45:23.130 00:45:23.130 real 0m18.837s 00:45:23.130 user 1m13.127s 00:45:23.130 sys 0m2.428s 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:45:23.130 ************************************ 00:45:23.130 END TEST nvmf_filesystem_in_capsule 00:45:23.130 ************************************ 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:45:23.130 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:23.131 rmmod nvme_tcp 00:45:23.131 rmmod nvme_fabrics 00:45:23.131 rmmod nvme_keyring 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:23.131 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:25.674 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:25.675 00:45:25.675 real 0m47.035s 00:45:25.675 user 2m28.822s 00:45:25.675 sys 0m9.870s 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:45:25.675 ************************************ 00:45:25.675 END TEST nvmf_filesystem 00:45:25.675 ************************************ 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:25.675 ************************************ 00:45:25.675 START TEST nvmf_target_discovery 00:45:25.675 ************************************ 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:45:25.675 * Looking for test storage... 
00:45:25.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:45:25.675 
10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.675 --rc genhtml_branch_coverage=1 00:45:25.675 --rc genhtml_function_coverage=1 00:45:25.675 --rc genhtml_legend=1 00:45:25.675 --rc geninfo_all_blocks=1 00:45:25.675 --rc geninfo_unexecuted_blocks=1 00:45:25.675 00:45:25.675 ' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.675 --rc genhtml_branch_coverage=1 00:45:25.675 --rc genhtml_function_coverage=1 00:45:25.675 --rc genhtml_legend=1 00:45:25.675 --rc geninfo_all_blocks=1 00:45:25.675 --rc geninfo_unexecuted_blocks=1 00:45:25.675 00:45:25.675 ' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.675 --rc genhtml_branch_coverage=1 00:45:25.675 --rc genhtml_function_coverage=1 00:45:25.675 --rc genhtml_legend=1 00:45:25.675 --rc geninfo_all_blocks=1 00:45:25.675 --rc geninfo_unexecuted_blocks=1 00:45:25.675 00:45:25.675 ' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.675 --rc genhtml_branch_coverage=1 00:45:25.675 --rc genhtml_function_coverage=1 00:45:25.675 --rc genhtml_legend=1 00:45:25.675 --rc geninfo_all_blocks=1 00:45:25.675 --rc geninfo_unexecuted_blocks=1 00:45:25.675 00:45:25.675 ' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:25.675 10:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.675 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:25.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:45:25.676 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:32.253 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:32.253 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:45:32.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:45:32.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:32.253 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:45:32.253 Found net devices under 0000:af:00.0: cvl_0_0 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:32.253 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:45:32.253 Found net devices under 0000:af:00.1: cvl_0_1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:32.253 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:32.254 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:32.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:32.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:45:32.513 00:45:32.513 --- 10.0.0.2 ping statistics --- 00:45:32.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:32.513 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:32.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:32.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:45:32.513 00:45:32.513 --- 10.0.0.1 ping statistics --- 00:45:32.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:32.513 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:45:32.513 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2333326 00:45:32.514 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2333326 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2333326 ']' 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:32.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:32.514 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:32.514 [2024-12-09 10:56:33.627453] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:45:32.514 [2024-12-09 10:56:33.627526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:32.773 [2024-12-09 10:56:33.760754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:32.773 [2024-12-09 10:56:33.815055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:45:32.773 [2024-12-09 10:56:33.815106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:32.773 [2024-12-09 10:56:33.815122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:32.773 [2024-12-09 10:56:33.815136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:32.773 [2024-12-09 10:56:33.815147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:32.773 [2024-12-09 10:56:33.817018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:32.773 [2024-12-09 10:56:33.817105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:32.773 [2024-12-09 10:56:33.817200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:32.773 [2024-12-09 10:56:33.817205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:32.773 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:32.773 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:45:32.773 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:32.773 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:32.773 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:33.034 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:33.034 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 [2024-12-09 10:56:33.990131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:33.034 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 Null1 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 
10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 [2024-12-09 10:56:34.062892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 Null2 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 
10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 Null3 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.034 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 Null4 00:45:33.035 
10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.035 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 4420 00:45:33.295 00:45:33.295 Discovery Log Number of Records 6, Generation counter 6 00:45:33.295 =====Discovery Log Entry 0====== 00:45:33.295 trtype: tcp 00:45:33.295 adrfam: ipv4 00:45:33.295 subtype: current discovery subsystem 00:45:33.295 treq: not required 00:45:33.295 portid: 0 00:45:33.295 trsvcid: 4420 00:45:33.295 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:33.295 traddr: 10.0.0.2 00:45:33.295 eflags: explicit discovery connections, duplicate discovery information 00:45:33.295 sectype: none 00:45:33.295 =====Discovery Log Entry 1====== 00:45:33.295 trtype: tcp 00:45:33.295 adrfam: ipv4 00:45:33.295 subtype: nvme subsystem 00:45:33.295 treq: not required 00:45:33.295 portid: 0 00:45:33.295 trsvcid: 4420 00:45:33.295 subnqn: nqn.2016-06.io.spdk:cnode1 00:45:33.295 traddr: 10.0.0.2 00:45:33.295 eflags: none 00:45:33.295 sectype: none 00:45:33.295 =====Discovery Log Entry 2====== 00:45:33.295 
trtype: tcp 00:45:33.295 adrfam: ipv4 00:45:33.295 subtype: nvme subsystem 00:45:33.295 treq: not required 00:45:33.295 portid: 0 00:45:33.296 trsvcid: 4420 00:45:33.296 subnqn: nqn.2016-06.io.spdk:cnode2 00:45:33.296 traddr: 10.0.0.2 00:45:33.296 eflags: none 00:45:33.296 sectype: none 00:45:33.296 =====Discovery Log Entry 3====== 00:45:33.296 trtype: tcp 00:45:33.296 adrfam: ipv4 00:45:33.296 subtype: nvme subsystem 00:45:33.296 treq: not required 00:45:33.296 portid: 0 00:45:33.296 trsvcid: 4420 00:45:33.296 subnqn: nqn.2016-06.io.spdk:cnode3 00:45:33.296 traddr: 10.0.0.2 00:45:33.296 eflags: none 00:45:33.296 sectype: none 00:45:33.296 =====Discovery Log Entry 4====== 00:45:33.296 trtype: tcp 00:45:33.296 adrfam: ipv4 00:45:33.296 subtype: nvme subsystem 00:45:33.296 treq: not required 00:45:33.296 portid: 0 00:45:33.296 trsvcid: 4420 00:45:33.296 subnqn: nqn.2016-06.io.spdk:cnode4 00:45:33.296 traddr: 10.0.0.2 00:45:33.296 eflags: none 00:45:33.296 sectype: none 00:45:33.296 =====Discovery Log Entry 5====== 00:45:33.296 trtype: tcp 00:45:33.296 adrfam: ipv4 00:45:33.296 subtype: discovery subsystem referral 00:45:33.296 treq: not required 00:45:33.296 portid: 0 00:45:33.296 trsvcid: 4430 00:45:33.296 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:33.296 traddr: 10.0.0.2 00:45:33.296 eflags: none 00:45:33.296 sectype: none 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:45:33.296 Perform nvmf subsystem discovery via RPC 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.296 [ 00:45:33.296 { 00:45:33.296 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:45:33.296 "subtype": "Discovery", 00:45:33.296 "listen_addresses": [ 00:45:33.296 { 00:45:33.296 "trtype": "TCP", 00:45:33.296 "adrfam": "IPv4", 00:45:33.296 "traddr": "10.0.0.2", 00:45:33.296 "trsvcid": "4420" 00:45:33.296 } 00:45:33.296 ], 00:45:33.296 "allow_any_host": true, 00:45:33.296 "hosts": [] 00:45:33.296 }, 00:45:33.296 { 00:45:33.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:33.296 "subtype": "NVMe", 00:45:33.296 "listen_addresses": [ 00:45:33.296 { 00:45:33.296 "trtype": "TCP", 00:45:33.296 "adrfam": "IPv4", 00:45:33.296 "traddr": "10.0.0.2", 00:45:33.296 "trsvcid": "4420" 00:45:33.296 } 00:45:33.296 ], 00:45:33.296 "allow_any_host": true, 00:45:33.296 "hosts": [], 00:45:33.296 "serial_number": "SPDK00000000000001", 00:45:33.296 "model_number": "SPDK bdev Controller", 00:45:33.296 "max_namespaces": 32, 00:45:33.296 "min_cntlid": 1, 00:45:33.296 "max_cntlid": 65519, 00:45:33.296 "namespaces": [ 00:45:33.296 { 00:45:33.296 "nsid": 1, 00:45:33.296 "bdev_name": "Null1", 00:45:33.296 "name": "Null1", 00:45:33.296 "nguid": "231C7B59BC4B4A25A721688197A9275E", 00:45:33.296 "uuid": "231c7b59-bc4b-4a25-a721-688197a9275e" 00:45:33.296 } 00:45:33.296 ] 00:45:33.296 }, 00:45:33.296 { 00:45:33.296 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:45:33.296 "subtype": "NVMe", 00:45:33.296 "listen_addresses": [ 00:45:33.296 { 00:45:33.296 "trtype": "TCP", 00:45:33.296 "adrfam": "IPv4", 00:45:33.296 "traddr": "10.0.0.2", 00:45:33.296 "trsvcid": "4420" 00:45:33.296 } 00:45:33.296 ], 00:45:33.296 "allow_any_host": true, 00:45:33.296 "hosts": [], 00:45:33.296 "serial_number": "SPDK00000000000002", 00:45:33.296 "model_number": "SPDK bdev Controller", 00:45:33.296 "max_namespaces": 32, 00:45:33.296 "min_cntlid": 1, 00:45:33.296 "max_cntlid": 65519, 00:45:33.296 "namespaces": [ 00:45:33.296 { 00:45:33.296 "nsid": 1, 00:45:33.296 "bdev_name": "Null2", 00:45:33.296 "name": "Null2", 00:45:33.296 "nguid": "C746F6D7D0654B3DA22BAB658041EA6B", 
00:45:33.296 "uuid": "c746f6d7-d065-4b3d-a22b-ab658041ea6b" 00:45:33.296 } 00:45:33.296 ] 00:45:33.296 }, 00:45:33.296 { 00:45:33.296 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:45:33.296 "subtype": "NVMe", 00:45:33.296 "listen_addresses": [ 00:45:33.296 { 00:45:33.296 "trtype": "TCP", 00:45:33.296 "adrfam": "IPv4", 00:45:33.296 "traddr": "10.0.0.2", 00:45:33.296 "trsvcid": "4420" 00:45:33.296 } 00:45:33.296 ], 00:45:33.296 "allow_any_host": true, 00:45:33.296 "hosts": [], 00:45:33.296 "serial_number": "SPDK00000000000003", 00:45:33.296 "model_number": "SPDK bdev Controller", 00:45:33.296 "max_namespaces": 32, 00:45:33.296 "min_cntlid": 1, 00:45:33.296 "max_cntlid": 65519, 00:45:33.296 "namespaces": [ 00:45:33.296 { 00:45:33.296 "nsid": 1, 00:45:33.296 "bdev_name": "Null3", 00:45:33.296 "name": "Null3", 00:45:33.296 "nguid": "F6BB72B9F4FF4574949429EA0F15370E", 00:45:33.296 "uuid": "f6bb72b9-f4ff-4574-9494-29ea0f15370e" 00:45:33.296 } 00:45:33.296 ] 00:45:33.296 }, 00:45:33.296 { 00:45:33.296 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:45:33.296 "subtype": "NVMe", 00:45:33.296 "listen_addresses": [ 00:45:33.296 { 00:45:33.296 "trtype": "TCP", 00:45:33.296 "adrfam": "IPv4", 00:45:33.296 "traddr": "10.0.0.2", 00:45:33.296 "trsvcid": "4420" 00:45:33.296 } 00:45:33.296 ], 00:45:33.296 "allow_any_host": true, 00:45:33.296 "hosts": [], 00:45:33.296 "serial_number": "SPDK00000000000004", 00:45:33.296 "model_number": "SPDK bdev Controller", 00:45:33.296 "max_namespaces": 32, 00:45:33.296 "min_cntlid": 1, 00:45:33.296 "max_cntlid": 65519, 00:45:33.296 "namespaces": [ 00:45:33.296 { 00:45:33.296 "nsid": 1, 00:45:33.296 "bdev_name": "Null4", 00:45:33.296 "name": "Null4", 00:45:33.296 "nguid": "2AFE890BF8A34DFF80C4E391E8671029", 00:45:33.296 "uuid": "2afe890b-f8a3-4dff-80c4-e391e8671029" 00:45:33.296 } 00:45:33.296 ] 00:45:33.296 } 00:45:33.296 ] 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.296 
10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:45:33.296 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:45:33.297 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:33.558 rmmod nvme_tcp 00:45:33.558 rmmod nvme_fabrics 00:45:33.558 rmmod nvme_keyring 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2333326 ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2333326 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2333326 ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2333326 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333326 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333326' 00:45:33.558 killing process with pid 2333326 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2333326 00:45:33.558 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2333326 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:33.819 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:36.365 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:36.365 00:45:36.365 real 0m10.557s 00:45:36.365 user 0m6.623s 00:45:36.365 sys 0m5.519s 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:36.365 ************************************ 00:45:36.365 END TEST nvmf_target_discovery 00:45:36.365 ************************************ 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:36.365 ************************************ 00:45:36.365 START TEST nvmf_referrals 00:45:36.365 ************************************ 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:45:36.365 * Looking for test storage... 
00:45:36.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:45:36.365 10:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:36.365 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.365 
--rc genhtml_branch_coverage=1 00:45:36.365 --rc genhtml_function_coverage=1 00:45:36.365 --rc genhtml_legend=1 00:45:36.365 --rc geninfo_all_blocks=1 00:45:36.365 --rc geninfo_unexecuted_blocks=1 00:45:36.365 00:45:36.365 ' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:36.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.366 --rc genhtml_branch_coverage=1 00:45:36.366 --rc genhtml_function_coverage=1 00:45:36.366 --rc genhtml_legend=1 00:45:36.366 --rc geninfo_all_blocks=1 00:45:36.366 --rc geninfo_unexecuted_blocks=1 00:45:36.366 00:45:36.366 ' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:36.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.366 --rc genhtml_branch_coverage=1 00:45:36.366 --rc genhtml_function_coverage=1 00:45:36.366 --rc genhtml_legend=1 00:45:36.366 --rc geninfo_all_blocks=1 00:45:36.366 --rc geninfo_unexecuted_blocks=1 00:45:36.366 00:45:36.366 ' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:36.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:36.366 --rc genhtml_branch_coverage=1 00:45:36.366 --rc genhtml_function_coverage=1 00:45:36.366 --rc genhtml_legend=1 00:45:36.366 --rc geninfo_all_blocks=1 00:45:36.366 --rc geninfo_unexecuted_blocks=1 00:45:36.366 00:45:36.366 ' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:36.366 
10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:36.366 10:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:36.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:36.366 10:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:45:36.366 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:45:44.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:45:44.505 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:44.505 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:45:44.506 Found net devices under 0000:af:00.0: cvl_0_0 00:45:44.506 10:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:45:44.506 Found net devices under 0000:af:00.1: cvl_0_1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:44.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:44.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:45:44.506 00:45:44.506 --- 10.0.0.2 ping statistics --- 00:45:44.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:44.506 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:44.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:44.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:45:44.506 00:45:44.506 --- 10.0.0.1 ping statistics --- 00:45:44.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:44.506 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2336766 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2336766 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2336766 ']' 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:44.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.506 [2024-12-09 10:56:44.581320] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:45:44.506 [2024-12-09 10:56:44.581408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:44.506 [2024-12-09 10:56:44.716875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:44.506 [2024-12-09 10:56:44.773195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:44.506 [2024-12-09 10:56:44.773243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:44.506 [2024-12-09 10:56:44.773258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:44.506 [2024-12-09 10:56:44.773272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:44.506 [2024-12-09 10:56:44.773285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:44.506 [2024-12-09 10:56:44.774990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:44.506 [2024-12-09 10:56:44.775074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:44.506 [2024-12-09 10:56:44.775168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:44.506 [2024-12-09 10:56:44.775172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.506 [2024-12-09 10:56:44.935200] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.506 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 [2024-12-09 10:56:44.961879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:45:44.507 10:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:45:44.507 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:44.767 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:45.027 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:45:45.287 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:45:45.547 10:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:45:45.547 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:45:45.807 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:46.066 rmmod nvme_tcp 00:45:46.066 rmmod nvme_fabrics 00:45:46.066 rmmod nvme_keyring 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2336766 ']' 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2336766 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2336766 ']' 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2336766 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:45:46.066 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:46.067 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336766 00:45:46.327 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:46.327 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:46.327 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336766' 00:45:46.327 killing process with pid 2336766 00:45:46.327 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2336766 00:45:46.327 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2336766 00:45:46.587 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:46.587 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:46.588 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.498 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:48.498 00:45:48.498 real 0m12.542s 00:45:48.498 user 0m14.169s 00:45:48.498 sys 0m6.089s 00:45:48.498 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:48.498 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:45:48.498 
************************************ 00:45:48.498 END TEST nvmf_referrals 00:45:48.498 ************************************ 00:45:48.758 10:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:45:48.758 10:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:48.758 10:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:48.758 10:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:48.759 ************************************ 00:45:48.759 START TEST nvmf_connect_disconnect 00:45:48.759 ************************************ 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:45:48.759 * Looking for test storage... 
00:45:48.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.759 --rc genhtml_branch_coverage=1 00:45:48.759 --rc genhtml_function_coverage=1 00:45:48.759 --rc genhtml_legend=1 00:45:48.759 --rc geninfo_all_blocks=1 00:45:48.759 --rc geninfo_unexecuted_blocks=1 00:45:48.759 00:45:48.759 ' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.759 --rc genhtml_branch_coverage=1 00:45:48.759 --rc genhtml_function_coverage=1 00:45:48.759 --rc genhtml_legend=1 00:45:48.759 --rc geninfo_all_blocks=1 00:45:48.759 --rc geninfo_unexecuted_blocks=1 00:45:48.759 00:45:48.759 ' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.759 --rc genhtml_branch_coverage=1 00:45:48.759 --rc genhtml_function_coverage=1 00:45:48.759 --rc genhtml_legend=1 00:45:48.759 --rc geninfo_all_blocks=1 00:45:48.759 --rc geninfo_unexecuted_blocks=1 00:45:48.759 00:45:48.759 ' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.759 --rc genhtml_branch_coverage=1 00:45:48.759 --rc genhtml_function_coverage=1 00:45:48.759 --rc genhtml_legend=1 00:45:48.759 --rc geninfo_all_blocks=1 00:45:48.759 --rc geninfo_unexecuted_blocks=1 00:45:48.759 00:45:48.759 ' 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:48.759 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:49.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:49.019 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:45:49.020 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:55.600 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:55.600 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:45:55.600 Found 0000:af:00.0 (0x8086 - 0x159b) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:45:55.600 Found 0000:af:00.1 (0x8086 - 0x159b) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:55.600 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:45:55.600 Found net devices under 0000:af:00.0: cvl_0_0 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:55.600 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:45:55.600 Found net devices under 0000:af:00.1: cvl_0_1 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:55.600 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:55.601 10:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:55.601 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:55.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:55.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:45:55.861 00:45:55.861 --- 10.0.0.2 ping statistics --- 00:45:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.861 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:55.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:55.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:45:55.861 00:45:55.861 --- 10.0.0.1 ping statistics --- 00:45:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.861 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2340450 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2340450 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2340450 ']' 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:55.861 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:55.861 [2024-12-09 10:56:56.907637] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:45:55.861 [2024-12-09 10:56:56.907723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:56.122 [2024-12-09 10:56:57.041765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:56.122 [2024-12-09 10:56:57.095876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:45:56.122 [2024-12-09 10:56:57.095932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:56.122 [2024-12-09 10:56:57.095948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:56.122 [2024-12-09 10:56:57.095963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:56.122 [2024-12-09 10:56:57.095974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:56.122 [2024-12-09 10:56:57.097900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.122 [2024-12-09 10:56:57.097990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:56.122 [2024-12-09 10:56:57.098080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:56.122 [2024-12-09 10:56:57.098086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:45:56.122 10:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.122 [2024-12-09 10:56:57.261888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.122 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.382 10:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:45:56.382 [2024-12-09 10:56:57.334275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:45:56.382 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:45:59.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:02.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:05.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:08.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:11.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:46:11.338 10:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:11.338 rmmod nvme_tcp 00:46:11.338 rmmod nvme_fabrics 00:46:11.338 rmmod nvme_keyring 00:46:11.338 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2340450 ']' 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2340450 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2340450 ']' 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2340450 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2340450 
00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2340450' 00:46:11.598 killing process with pid 2340450 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2340450 00:46:11.598 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2340450 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:11.857 10:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:11.857 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:13.767 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:14.027 00:46:14.027 real 0m25.233s 00:46:14.027 user 1m4.066s 00:46:14.027 sys 0m7.095s 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:46:14.027 ************************************ 00:46:14.027 END TEST nvmf_connect_disconnect 00:46:14.027 ************************************ 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:14.027 10:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:14.027 ************************************ 00:46:14.027 START TEST nvmf_multitarget 00:46:14.027 ************************************ 00:46:14.027 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:46:14.027 * Looking for test storage... 
00:46:14.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:14.027 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:14.027 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:46:14.027 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:14.288 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.288 --rc genhtml_branch_coverage=1 00:46:14.288 --rc genhtml_function_coverage=1 00:46:14.288 --rc genhtml_legend=1 00:46:14.288 --rc geninfo_all_blocks=1 00:46:14.288 --rc geninfo_unexecuted_blocks=1 00:46:14.288 00:46:14.288 ' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:14.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.288 --rc genhtml_branch_coverage=1 00:46:14.288 --rc genhtml_function_coverage=1 00:46:14.288 --rc genhtml_legend=1 00:46:14.288 --rc geninfo_all_blocks=1 00:46:14.288 --rc geninfo_unexecuted_blocks=1 00:46:14.288 00:46:14.288 ' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:14.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.288 --rc genhtml_branch_coverage=1 00:46:14.288 --rc genhtml_function_coverage=1 00:46:14.288 --rc genhtml_legend=1 00:46:14.288 --rc geninfo_all_blocks=1 00:46:14.288 --rc geninfo_unexecuted_blocks=1 00:46:14.288 00:46:14.288 ' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:14.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.288 --rc genhtml_branch_coverage=1 00:46:14.288 --rc genhtml_function_coverage=1 00:46:14.288 --rc genhtml_legend=1 00:46:14.288 --rc geninfo_all_blocks=1 00:46:14.288 --rc geninfo_unexecuted_blocks=1 00:46:14.288 00:46:14.288 ' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:14.288 10:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:14.288 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:14.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:14.289 10:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:46:14.289 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:46:20.865 10:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:20.865 10:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:20.865 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:46:20.866 Found 0000:af:00.0 (0x8086 - 0x159b) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:46:20.866 Found 0000:af:00.1 (0x8086 - 0x159b) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:20.866 10:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:46:20.866 Found net devices under 0000:af:00.0: cvl_0_0 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:20.866 
10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:46:20.866 Found net devices under 0000:af:00.1: cvl_0_1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:20.866 10:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:20.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:20.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:46:20.866 00:46:20.866 --- 10.0.0.2 ping statistics --- 00:46:20.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:20.866 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:20.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:20.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:46:20.866 00:46:20.866 --- 10.0.0.1 ping statistics --- 00:46:20.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:20.866 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2346287 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2346287 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2346287 ']' 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:20.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:20.866 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:46:20.866 [2024-12-09 10:57:21.916006] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:46:20.866 [2024-12-09 10:57:21.916079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:21.127 [2024-12-09 10:57:22.047105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:21.127 [2024-12-09 10:57:22.099618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:21.127 [2024-12-09 10:57:22.099673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:46:21.127 [2024-12-09 10:57:22.099689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:21.127 [2024-12-09 10:57:22.099704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:21.127 [2024-12-09 10:57:22.099716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:21.127 [2024-12-09 10:57:22.101493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:21.127 [2024-12-09 10:57:22.101580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:21.127 [2024-12-09 10:57:22.101673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:21.127 [2024-12-09 10:57:22.101677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:46:21.127 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:46:21.127 10:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:46:21.387 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:46:21.387 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:46:21.387 "nvmf_tgt_1" 00:46:21.387 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:46:21.647 "nvmf_tgt_2" 00:46:21.647 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:46:21.647 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:46:21.906 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:46:21.907 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:46:21.907 true 00:46:21.907 10:57:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:46:22.166 true 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:46:22.166 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:22.167 rmmod nvme_tcp 00:46:22.167 rmmod nvme_fabrics 00:46:22.167 rmmod nvme_keyring 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2346287 ']' 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2346287 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2346287 ']' 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2346287 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:22.167 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2346287 00:46:22.427 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:22.427 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:22.427 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2346287' 00:46:22.427 killing process with pid 2346287 00:46:22.427 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2346287 00:46:22.427 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2346287 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:22.686 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:24.596 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:24.596 00:46:24.596 real 0m10.711s 00:46:24.596 user 0m9.211s 00:46:24.596 sys 0m5.369s 00:46:24.596 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:24.596 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:46:24.596 ************************************ 00:46:24.596 END TEST nvmf_multitarget 00:46:24.596 ************************************ 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:24.857 ************************************ 00:46:24.857 START TEST nvmf_rpc 00:46:24.857 ************************************ 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:46:24.857 * Looking for test storage... 
00:46:24.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:46:24.857 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:46:24.857 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:24.857 10:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:25.117 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:25.118 --rc genhtml_branch_coverage=1 00:46:25.118 --rc genhtml_function_coverage=1 00:46:25.118 --rc genhtml_legend=1 00:46:25.118 --rc geninfo_all_blocks=1 00:46:25.118 --rc geninfo_unexecuted_blocks=1 
00:46:25.118 00:46:25.118 ' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:25.118 --rc genhtml_branch_coverage=1 00:46:25.118 --rc genhtml_function_coverage=1 00:46:25.118 --rc genhtml_legend=1 00:46:25.118 --rc geninfo_all_blocks=1 00:46:25.118 --rc geninfo_unexecuted_blocks=1 00:46:25.118 00:46:25.118 ' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:25.118 --rc genhtml_branch_coverage=1 00:46:25.118 --rc genhtml_function_coverage=1 00:46:25.118 --rc genhtml_legend=1 00:46:25.118 --rc geninfo_all_blocks=1 00:46:25.118 --rc geninfo_unexecuted_blocks=1 00:46:25.118 00:46:25.118 ' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:25.118 --rc genhtml_branch_coverage=1 00:46:25.118 --rc genhtml_function_coverage=1 00:46:25.118 --rc genhtml_legend=1 00:46:25.118 --rc geninfo_all_blocks=1 00:46:25.118 --rc geninfo_unexecuted_blocks=1 00:46:25.118 00:46:25.118 ' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:25.118 10:57:26 
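The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.-:` (note the `IFS=.-:` and `read -ra ver1` steps) and compares the components numerically, position by position. A minimal, hypothetical re-implementation of that idea — not the SPDK `scripts/common.sh` helper verbatim, and assuming purely numeric components:

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed iff version A sorts strictly before version B,
# comparing dot/dash/colon-separated components numerically, left to right.
# Missing components are treated as 0 (so 2.0 == 2).
ver_lt() {
    local IFS=.-: a b i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi   # first differing component decides
        if (( x > y )); then return 1; fi
    done
    return 1                                # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's conclusion: lcov 1.15 is older than 2, so the branch/function coverage `--rc` options get enabled.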
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
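The exported `PATH` above has accumulated many repeated `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries, because `paths/export.sh` prepends them once per nested sourcing. Harmless, but noisy. An illustrative first-occurrence dedup helper (not part of the SPDK tree) can collapse such a path while preserving order:

```shell
# dedup_path PATHSTRING: print PATHSTRING with duplicate entries removed,
# keeping the first occurrence of each (awk splits on ":" and filters
# records it has already seen; sed trims the trailing separator).
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# keeps: /opt/go/bin:/usr/bin:/bin
```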
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:25.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:25.118 10:57:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:46:25.118 10:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:33.251 
10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:46:33.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:46:33.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:46:33.251 Found net devices under 0000:af:00.0: cvl_0_0 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:46:33.251 Found net devices under 0000:af:00.1: cvl_0_1 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:33.251 10:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:33.251 
10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:33.251 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:33.251 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:33.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:33.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:46:33.252 00:46:33.252 --- 10.0.0.2 ping statistics --- 00:46:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:33.252 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:33.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:33.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:46:33.252 00:46:33.252 --- 10.0.0.1 ping statistics --- 00:46:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:33.252 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2349766 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2349766 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
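In the namespace setup just traced, the `ipts` wrapper (nvmf/common.sh@287/@790) tags every rule it inserts with an `SPDK_NVMF:` comment repeating the original arguments, so teardown can later delete exactly the rules the test added. A sketch of that argument handling which echoes the command instead of invoking iptables, so it runs unprivileged (the real helper lives in the test harness and calls iptables for real):

```shell
# ipts_sketch ARGS...: show the iptables invocation the wrapper would make,
# appending a match comment of the form "SPDK_NVMF:<original args>".
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The output reproduces the expanded command visible in the trace, including the `SPDK_NVMF:-I INPUT 1 ...` comment.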
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2349766 ']' 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:33.252 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 [2024-12-09 10:57:33.263968] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:46:33.252 [2024-12-09 10:57:33.264047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:33.252 [2024-12-09 10:57:33.400751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:33.252 [2024-12-09 10:57:33.456379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:33.252 [2024-12-09 10:57:33.456428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:46:33.252 [2024-12-09 10:57:33.456444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:33.252 [2024-12-09 10:57:33.456459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:33.252 [2024-12-09 10:57:33.456471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:33.252 [2024-12-09 10:57:33.458424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.252 [2024-12-09 10:57:33.458512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:33.252 [2024-12-09 10:57:33.458833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:33.252 [2024-12-09 10:57:33.458838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.252 10:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:46:33.252 "tick_rate": 2300000000, 00:46:33.252 "poll_groups": [ 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_000", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_001", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_002", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_003", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [] 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 }' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:46:33.252 10:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 [2024-12-09 10:57:34.364990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:46:33.252 "tick_rate": 2300000000, 00:46:33.252 "poll_groups": [ 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_000", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [ 00:46:33.252 { 00:46:33.252 "trtype": "TCP" 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_001", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 
"completed_nvme_io": 0, 00:46:33.252 "transports": [ 00:46:33.252 { 00:46:33.252 "trtype": "TCP" 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_002", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [ 00:46:33.252 { 00:46:33.252 "trtype": "TCP" 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 }, 00:46:33.252 { 00:46:33.252 "name": "nvmf_tgt_poll_group_003", 00:46:33.252 "admin_qpairs": 0, 00:46:33.252 "io_qpairs": 0, 00:46:33.252 "current_admin_qpairs": 0, 00:46:33.252 "current_io_qpairs": 0, 00:46:33.252 "pending_bdev_io": 0, 00:46:33.252 "completed_nvme_io": 0, 00:46:33.252 "transports": [ 00:46:33.252 { 00:46:33.252 "trtype": "TCP" 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 } 00:46:33.252 ] 00:46:33.252 }' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:46:33.252 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:46:33.253 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:46:33.513 
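The `jcount`/`jsum` helpers traced here (target/rpc.sh@14–20) validate the `nvmf_get_stats` JSON by piping a jq filter to `wc -l` (count matches) or to an awk accumulator (sum values). Equivalent minimal sketches over a trimmed stand-in for the stats document captured above (field names from the log; requires jq):

```shell
# Two-poll-group stand-in for the nvmf_get_stats output shown in the trace.
stats='{"poll_groups":[{"name":"pg0","io_qpairs":0},{"name":"pg1","io_qpairs":0}]}'

# jcount FILTER: how many values the jq filter yields (one per line).
jcount() { jq "$1" <<< "$stats" | wc -l; }
# jsum FILTER: numeric sum of the values the jq filter yields.
jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

jcount '.poll_groups[].name'       # poll-group count
jsum   '.poll_groups[].io_qpairs'  # total I/O qpairs
```

This mirrors the trace's checks: four poll groups counted, and zero admin/I/O qpairs summed before any connections exist.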
10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 Malloc1 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:46:33.513 10:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 [2024-12-09 10:57:34.576291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 10.0.0.2 -s 4420 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 10.0.0.2 -s 4420 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 10.0.0.2 -s 4420 00:46:33.513 [2024-12-09 10:57:34.617005] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:46:33.513 Failed to write to /dev/nvme-fabrics: Input/output error 00:46:33.513 could not add new controller: failed to write to nvme-fabrics device 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.513 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:34.455 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:46:34.455 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:34.455 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:34.455 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:34.455 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
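The `waitforserial` trace above polls `lsblk -l -o NAME,SERIAL` with `grep -c` until the expected number of devices carrying the serial appears, sleeping 2 seconds between attempts for up to 15 tries. A self-contained sketch of that loop, with a stub function standing in for `lsblk` (the stub and its output are assumptions, not SPDK code) so it runs anywhere:

```shell
# Stub for `lsblk -l -o NAME,SERIAL`; the real helper queries the kernel
# for block devices exposed by the freshly connected NVMe controller.
list_devices() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
    local serial=$1 i=0 nvme_devices=0
    while [ $((i += 1)) -le 15 ]; do
        # Count devices whose SERIAL column matches, as grep -c does in the log.
        nvme_devices=$(list_devices | grep -c "$serial")
        [ "$nvme_devices" -ge 1 ] && return 0
        sleep 2   # same 2-second back-off as common/autotest_common.sh
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

With the stub the first probe succeeds immediately; in the real run the initial `sleep 2` before the loop gives the fabric connect time to complete.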
00:46:36.362 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:36.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:46:36.622 10:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:36.622 [2024-12-09 10:57:37.703166] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:46:36.622 Failed to write to /dev/nvme-fabrics: Input/output error 00:46:36.622 could not add new controller: failed to write to nvme-fabrics device 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:46:36.622 
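The failed connect above exercises the target's host allow-list: `allow_any_host` was disabled and the initiator's host NQN had been removed from the subsystem, so `nvmf_qpair_access_allowed` rejects the connection and the write to /dev/nvme-fabrics returns an I/O error. A minimal shell sketch of that access decision (the function and variable names are hypothetical, not SPDK's implementation):

```shell
# Hypothetical model of the subsystem's access check seen in the log.
allow_any_host=0
allowed_hosts="nqn.2014-08.org.nvmexpress:uuid:aaaa"

access_allowed() {
    local host_nqn=$1
    # allow_any_host short-circuits the per-host list, as after
    # `nvmf_subsystem_allow_any_host -e` in the log.
    [ "$allow_any_host" -eq 1 ] && return 0
    for h in $allowed_hosts; do
        [ "$h" = "$host_nqn" ] && return 0
    done
    return 1
}

if access_allowed "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c"; then
    echo "connect allowed"
else
    echo "does not allow host"   # the *ERROR* path taken in the log
fi
```

This is why the later `nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1` call makes the very same `nvme connect` succeed without re-adding the host.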
10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.622 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:37.594 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:46:37.594 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:37.594 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:37.594 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:37.594 10:57:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:39.502 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:39.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:46:39.762 10:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.762 [2024-12-09 10:57:40.841811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.762 10:57:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:40.702 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:46:40.702 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:40.702 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:40.702 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:40.702 10:57:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:42.614 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:42.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:42.875 
10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.875 [2024-12-09 10:57:43.920299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:42.875 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.876 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:43.817 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:46:43.817 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:43.817 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:43.817 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:43.817 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:45.729 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:45.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.989 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.989 10:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.989 [2024-12-09 10:57:47.016474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.989 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.990 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:46.930 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:46:46.930 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:46.930 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:46.930 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:46.930 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:48.841 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:48.841 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:49.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.100 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.101 [2024-12-09 10:57:50.135631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.101 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:50.035 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:46:50.035 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:50.035 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:46:50.035 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:50.035 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:51.935 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:52.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 [2024-12-09 10:57:53.214559] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.196 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:53.134 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:46:53.134 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:46:53.134 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:53.134 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:53.134 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:46:55.044 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:55.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.305 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 [2024-12-09 10:57:56.305361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 [2024-12-09 10:57:56.361505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 
10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:46:55.306 [2024-12-09 10:57:56.417717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 [2024-12-09 10:57:56.465886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.306 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 [2024-12-09 10:57:56.518053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.567 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:46:55.568 "tick_rate": 2300000000, 00:46:55.568 "poll_groups": [ 00:46:55.568 { 00:46:55.568 "name": "nvmf_tgt_poll_group_000", 00:46:55.568 "admin_qpairs": 2, 00:46:55.568 "io_qpairs": 126, 00:46:55.568 "current_admin_qpairs": 0, 00:46:55.568 "current_io_qpairs": 0, 00:46:55.568 "pending_bdev_io": 0, 00:46:55.568 "completed_nvme_io": 210, 00:46:55.568 "transports": [ 00:46:55.568 { 00:46:55.568 "trtype": "TCP" 00:46:55.568 } 00:46:55.568 ] 00:46:55.568 }, 00:46:55.568 { 00:46:55.568 "name": "nvmf_tgt_poll_group_001", 00:46:55.568 "admin_qpairs": 2, 00:46:55.568 "io_qpairs": 126, 00:46:55.568 "current_admin_qpairs": 0, 00:46:55.568 "current_io_qpairs": 0, 00:46:55.568 "pending_bdev_io": 0, 00:46:55.568 "completed_nvme_io": 240, 00:46:55.568 "transports": [ 00:46:55.568 { 00:46:55.568 "trtype": "TCP" 00:46:55.568 } 00:46:55.568 ] 00:46:55.568 }, 00:46:55.568 { 00:46:55.568 "name": "nvmf_tgt_poll_group_002", 00:46:55.568 "admin_qpairs": 1, 00:46:55.568 "io_qpairs": 126, 00:46:55.568 "current_admin_qpairs": 0, 00:46:55.568 "current_io_qpairs": 0, 00:46:55.568 "pending_bdev_io": 0, 
00:46:55.568 "completed_nvme_io": 177, 00:46:55.568 "transports": [ 00:46:55.568 { 00:46:55.568 "trtype": "TCP" 00:46:55.568 } 00:46:55.568 ] 00:46:55.568 }, 00:46:55.568 { 00:46:55.568 "name": "nvmf_tgt_poll_group_003", 00:46:55.568 "admin_qpairs": 2, 00:46:55.568 "io_qpairs": 126, 00:46:55.568 "current_admin_qpairs": 0, 00:46:55.568 "current_io_qpairs": 0, 00:46:55.568 "pending_bdev_io": 0, 00:46:55.568 "completed_nvme_io": 227, 00:46:55.568 "transports": [ 00:46:55.568 { 00:46:55.568 "trtype": "TCP" 00:46:55.568 } 00:46:55.568 ] 00:46:55.568 } 00:46:55.568 ] 00:46:55.568 }' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 504 > 0 )) 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:55.568 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:55.568 rmmod nvme_tcp 00:46:55.568 rmmod nvme_fabrics 00:46:55.568 rmmod nvme_keyring 00:46:55.828 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:55.828 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:46:55.828 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:46:55.828 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2349766 ']' 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2349766 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2349766 ']' 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2349766 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2349766 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2349766' 00:46:55.829 killing process with pid 2349766 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2349766 00:46:55.829 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2349766 00:46:56.089 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:56.089 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:56.089 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:56.090 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:58.004 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:58.265 00:46:58.265 real 0m33.350s 00:46:58.265 user 1m35.836s 00:46:58.265 sys 0m8.201s 00:46:58.265 10:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:58.265 ************************************ 00:46:58.265 END TEST nvmf_rpc 00:46:58.265 ************************************ 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:58.265 ************************************ 00:46:58.265 START TEST nvmf_invalid 00:46:58.265 ************************************ 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:46:58.265 * Looking for test storage... 
00:46:58.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:46:58.265 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.526 --rc genhtml_branch_coverage=1 00:46:58.526 --rc 
genhtml_function_coverage=1 00:46:58.526 --rc genhtml_legend=1 00:46:58.526 --rc geninfo_all_blocks=1 00:46:58.526 --rc geninfo_unexecuted_blocks=1 00:46:58.526 00:46:58.526 ' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.526 --rc genhtml_branch_coverage=1 00:46:58.526 --rc genhtml_function_coverage=1 00:46:58.526 --rc genhtml_legend=1 00:46:58.526 --rc geninfo_all_blocks=1 00:46:58.526 --rc geninfo_unexecuted_blocks=1 00:46:58.526 00:46:58.526 ' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.526 --rc genhtml_branch_coverage=1 00:46:58.526 --rc genhtml_function_coverage=1 00:46:58.526 --rc genhtml_legend=1 00:46:58.526 --rc geninfo_all_blocks=1 00:46:58.526 --rc geninfo_unexecuted_blocks=1 00:46:58.526 00:46:58.526 ' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:58.526 --rc genhtml_branch_coverage=1 00:46:58.526 --rc genhtml_function_coverage=1 00:46:58.526 --rc genhtml_legend=1 00:46:58.526 --rc geninfo_all_blocks=1 00:46:58.526 --rc geninfo_unexecuted_blocks=1 00:46:58.526 00:46:58.526 ' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:58.526 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:58.527 10:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:58.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:58.527 10:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:46:58.527 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:47:05.110 10:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:05.110 10:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:47:05.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:05.110 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:47:05.111 Found 0000:af:00.1 (0x8086 - 0x159b) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:47:05.111 Found net devices under 0000:af:00.0: cvl_0_0 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:47:05.111 Found net devices under 0000:af:00.1: cvl_0_1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:05.111 10:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:05.111 10:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:05.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:05.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:47:05.111 00:47:05.111 --- 10.0.0.2 ping statistics --- 00:47:05.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:05.111 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:05.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:05.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:47:05.111 00:47:05.111 --- 10.0.0.1 ping statistics --- 00:47:05.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:05.111 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:05.111 10:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2356132 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2356132 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2356132 ']' 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:05.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:05.111 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:47:05.111 [2024-12-09 10:58:05.611269] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:47:05.111 [2024-12-09 10:58:05.611326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:05.111 [2024-12-09 10:58:05.724273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:05.111 [2024-12-09 10:58:05.781530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:05.111 [2024-12-09 10:58:05.781578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:05.111 [2024-12-09 10:58:05.781594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:05.111 [2024-12-09 10:58:05.781607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:05.111 [2024-12-09 10:58:05.781619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:05.111 [2024-12-09 10:58:05.783519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:05.111 [2024-12-09 10:58:05.783607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:05.111 [2024-12-09 10:58:05.783699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:05.111 [2024-12-09 10:58:05.783704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:05.372 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:05.372 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:47:05.372 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:05.372 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:05.633 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:47:05.633 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:05.633 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:47:05.633 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5606 00:47:05.893 [2024-12-09 10:58:06.859744] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:47:05.893 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:47:05.893 { 00:47:05.893 "nqn": "nqn.2016-06.io.spdk:cnode5606", 00:47:05.893 "tgt_name": "foobar", 00:47:05.893 "method": "nvmf_create_subsystem", 00:47:05.893 "req_id": 1 00:47:05.893 } 00:47:05.893 Got JSON-RPC error 
response 00:47:05.893 response: 00:47:05.893 { 00:47:05.893 "code": -32603, 00:47:05.893 "message": "Unable to find target foobar" 00:47:05.893 }' 00:47:05.893 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:47:05.893 { 00:47:05.893 "nqn": "nqn.2016-06.io.spdk:cnode5606", 00:47:05.893 "tgt_name": "foobar", 00:47:05.893 "method": "nvmf_create_subsystem", 00:47:05.893 "req_id": 1 00:47:05.893 } 00:47:05.893 Got JSON-RPC error response 00:47:05.893 response: 00:47:05.893 { 00:47:05.893 "code": -32603, 00:47:05.893 "message": "Unable to find target foobar" 00:47:05.893 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:47:05.893 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:47:05.893 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18306 00:47:06.154 [2024-12-09 10:58:07.144702] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18306: invalid serial number 'SPDKISFASTANDAWESOME' 00:47:06.154 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:47:06.154 { 00:47:06.154 "nqn": "nqn.2016-06.io.spdk:cnode18306", 00:47:06.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:47:06.154 "method": "nvmf_create_subsystem", 00:47:06.154 "req_id": 1 00:47:06.154 } 00:47:06.154 Got JSON-RPC error response 00:47:06.154 response: 00:47:06.154 { 00:47:06.154 "code": -32602, 00:47:06.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:47:06.154 }' 00:47:06.154 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:47:06.154 { 00:47:06.154 "nqn": "nqn.2016-06.io.spdk:cnode18306", 00:47:06.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:47:06.154 "method": "nvmf_create_subsystem", 00:47:06.154 
"req_id": 1 00:47:06.154 } 00:47:06.154 Got JSON-RPC error response 00:47:06.154 response: 00:47:06.154 { 00:47:06.154 "code": -32602, 00:47:06.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:47:06.154 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:47:06.154 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:47:06.154 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1500 00:47:06.414 [2024-12-09 10:58:07.349334] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1500: invalid model number 'SPDK_Controller' 00:47:06.414 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:47:06.414 { 00:47:06.414 "nqn": "nqn.2016-06.io.spdk:cnode1500", 00:47:06.414 "model_number": "SPDK_Controller\u001f", 00:47:06.414 "method": "nvmf_create_subsystem", 00:47:06.414 "req_id": 1 00:47:06.414 } 00:47:06.414 Got JSON-RPC error response 00:47:06.414 response: 00:47:06.414 { 00:47:06.415 "code": -32602, 00:47:06.415 "message": "Invalid MN SPDK_Controller\u001f" 00:47:06.415 }' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:47:06.415 { 00:47:06.415 "nqn": "nqn.2016-06.io.spdk:cnode1500", 00:47:06.415 "model_number": "SPDK_Controller\u001f", 00:47:06.415 "method": "nvmf_create_subsystem", 00:47:06.415 "req_id": 1 00:47:06.415 } 00:47:06.415 Got JSON-RPC error response 00:47:06.415 response: 00:47:06.415 { 00:47:06.415 "code": -32602, 00:47:06.415 "message": "Invalid MN SPDK_Controller\u001f" 00:47:06.415 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:47:06.415 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:47:06.415 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:47:06.415 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.415 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-MZR/+v`I]c]x4h@.)|`' 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-MZR/+v`I]c]x4h@.)|`' 00:47:06.416 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-MZR/+v`I]c]x4h@.)|`' nqn.2016-06.io.spdk:cnode31659 00:47:06.675 [2024-12-09 10:58:07.814879] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31659: invalid serial number '\-MZR/+v`I]c]x4h@.)|`' 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:47:06.675 { 00:47:06.675 "nqn": "nqn.2016-06.io.spdk:cnode31659", 00:47:06.675 "serial_number": "\\-MZR/+v`I]c]x4h@.)\u007f|`", 00:47:06.675 "method": "nvmf_create_subsystem", 00:47:06.675 "req_id": 1 00:47:06.675 } 00:47:06.675 Got JSON-RPC error response 00:47:06.675 response: 00:47:06.675 { 00:47:06.675 "code": -32602, 00:47:06.675 "message": "Invalid SN \\-MZR/+v`I]c]x4h@.)\u007f|`" 00:47:06.675 }' 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:47:06.675 { 00:47:06.675 "nqn": "nqn.2016-06.io.spdk:cnode31659", 00:47:06.675 "serial_number": "\\-MZR/+v`I]c]x4h@.)\u007f|`", 00:47:06.675 "method": "nvmf_create_subsystem", 00:47:06.675 "req_id": 1 00:47:06.675 } 00:47:06.675 Got JSON-RPC error response 00:47:06.675 response: 00:47:06.675 { 
00:47:06.675 "code": -32602, 00:47:06.675 "message": "Invalid SN \\-MZR/+v`I]c]x4h@.)\u007f|`" 00:47:06.675 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:47:06.675 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:47:06.936 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:47:06.936 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.936 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 49 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 
10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:47:06.937 
10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:47:06.937 10:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:47:06.937 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:47:06.937 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:47:06.938 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:06.938 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:47:07.198 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:47:07.198 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:47:07.199 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '71]~K+U2|qJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '71]~K+U2|qJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn' nqn.2016-06.io.spdk:cnode1916 00:47:07.199 [2024-12-09 10:58:08.352735] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1916: invalid model number '71]~K+U2|qJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn' 00:47:07.199 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:47:07.199 { 00:47:07.199 "nqn": "nqn.2016-06.io.spdk:cnode1916", 00:47:07.199 "model_number": "71]~K+U2|\u007fqJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn", 00:47:07.199 "method": "nvmf_create_subsystem", 00:47:07.199 "req_id": 1 00:47:07.199 } 00:47:07.199 Got JSON-RPC error response 
00:47:07.199 response: 00:47:07.199 { 00:47:07.199 "code": -32602, 00:47:07.199 "message": "Invalid MN 71]~K+U2|\u007fqJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn" 00:47:07.199 }' 00:47:07.458 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:47:07.458 { 00:47:07.458 "nqn": "nqn.2016-06.io.spdk:cnode1916", 00:47:07.458 "model_number": "71]~K+U2|\u007fqJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn", 00:47:07.458 "method": "nvmf_create_subsystem", 00:47:07.458 "req_id": 1 00:47:07.458 } 00:47:07.458 Got JSON-RPC error response 00:47:07.458 response: 00:47:07.458 { 00:47:07.458 "code": -32602, 00:47:07.458 "message": "Invalid MN 71]~K+U2|\u007fqJ~!-cL545.UQCGvLz3XU_AIp>Uqnhn" 00:47:07.458 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:47:07.458 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:47:07.458 [2024-12-09 10:58:08.549543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:07.458 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:47:07.718 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:47:07.718 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:47:07.718 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:47:07.718 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:47:07.718 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:47:07.977 [2024-12-09 10:58:09.067402] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to 
remove listener, rc -2 00:47:07.977 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:47:07.977 { 00:47:07.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:47:07.977 "listen_address": { 00:47:07.977 "trtype": "tcp", 00:47:07.977 "traddr": "", 00:47:07.977 "trsvcid": "4421" 00:47:07.977 }, 00:47:07.977 "method": "nvmf_subsystem_remove_listener", 00:47:07.977 "req_id": 1 00:47:07.977 } 00:47:07.977 Got JSON-RPC error response 00:47:07.977 response: 00:47:07.977 { 00:47:07.977 "code": -32602, 00:47:07.977 "message": "Invalid parameters" 00:47:07.977 }' 00:47:07.977 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:47:07.977 { 00:47:07.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:47:07.977 "listen_address": { 00:47:07.977 "trtype": "tcp", 00:47:07.977 "traddr": "", 00:47:07.977 "trsvcid": "4421" 00:47:07.977 }, 00:47:07.977 "method": "nvmf_subsystem_remove_listener", 00:47:07.977 "req_id": 1 00:47:07.977 } 00:47:07.977 Got JSON-RPC error response 00:47:07.977 response: 00:47:07.977 { 00:47:07.977 "code": -32602, 00:47:07.977 "message": "Invalid parameters" 00:47:07.977 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:47:07.977 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4071 -i 0 00:47:08.238 [2024-12-09 10:58:09.348265] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4071: invalid cntlid range [0-65519] 00:47:08.238 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:47:08.238 { 00:47:08.238 "nqn": "nqn.2016-06.io.spdk:cnode4071", 00:47:08.238 "min_cntlid": 0, 00:47:08.238 "method": "nvmf_create_subsystem", 00:47:08.238 "req_id": 1 00:47:08.238 } 00:47:08.238 Got JSON-RPC error response 00:47:08.238 response: 00:47:08.238 { 00:47:08.238 "code": -32602, 
00:47:08.238 "message": "Invalid cntlid range [0-65519]" 00:47:08.238 }' 00:47:08.238 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:47:08.238 { 00:47:08.238 "nqn": "nqn.2016-06.io.spdk:cnode4071", 00:47:08.238 "min_cntlid": 0, 00:47:08.238 "method": "nvmf_create_subsystem", 00:47:08.238 "req_id": 1 00:47:08.238 } 00:47:08.238 Got JSON-RPC error response 00:47:08.238 response: 00:47:08.238 { 00:47:08.238 "code": -32602, 00:47:08.238 "message": "Invalid cntlid range [0-65519]" 00:47:08.238 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:08.238 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6682 -i 65520 00:47:08.498 [2024-12-09 10:58:09.561005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6682: invalid cntlid range [65520-65519] 00:47:08.498 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:47:08.498 { 00:47:08.498 "nqn": "nqn.2016-06.io.spdk:cnode6682", 00:47:08.498 "min_cntlid": 65520, 00:47:08.498 "method": "nvmf_create_subsystem", 00:47:08.498 "req_id": 1 00:47:08.498 } 00:47:08.498 Got JSON-RPC error response 00:47:08.498 response: 00:47:08.498 { 00:47:08.498 "code": -32602, 00:47:08.498 "message": "Invalid cntlid range [65520-65519]" 00:47:08.498 }' 00:47:08.498 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:47:08.498 { 00:47:08.498 "nqn": "nqn.2016-06.io.spdk:cnode6682", 00:47:08.498 "min_cntlid": 65520, 00:47:08.498 "method": "nvmf_create_subsystem", 00:47:08.498 "req_id": 1 00:47:08.498 } 00:47:08.498 Got JSON-RPC error response 00:47:08.498 response: 00:47:08.498 { 00:47:08.498 "code": -32602, 00:47:08.498 "message": "Invalid cntlid range [65520-65519]" 00:47:08.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:08.498 
10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27562 -I 0 00:47:08.758 [2024-12-09 10:58:09.841986] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27562: invalid cntlid range [1-0] 00:47:08.758 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:47:08.758 { 00:47:08.758 "nqn": "nqn.2016-06.io.spdk:cnode27562", 00:47:08.758 "max_cntlid": 0, 00:47:08.758 "method": "nvmf_create_subsystem", 00:47:08.758 "req_id": 1 00:47:08.758 } 00:47:08.758 Got JSON-RPC error response 00:47:08.758 response: 00:47:08.758 { 00:47:08.758 "code": -32602, 00:47:08.758 "message": "Invalid cntlid range [1-0]" 00:47:08.758 }' 00:47:08.758 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:47:08.758 { 00:47:08.758 "nqn": "nqn.2016-06.io.spdk:cnode27562", 00:47:08.758 "max_cntlid": 0, 00:47:08.758 "method": "nvmf_create_subsystem", 00:47:08.758 "req_id": 1 00:47:08.758 } 00:47:08.758 Got JSON-RPC error response 00:47:08.758 response: 00:47:08.758 { 00:47:08.758 "code": -32602, 00:47:08.758 "message": "Invalid cntlid range [1-0]" 00:47:08.758 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:08.758 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22674 -I 65520 00:47:09.017 [2024-12-09 10:58:10.122937] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22674: invalid cntlid range [1-65520] 00:47:09.017 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:47:09.017 { 00:47:09.017 "nqn": "nqn.2016-06.io.spdk:cnode22674", 00:47:09.017 "max_cntlid": 65520, 00:47:09.017 "method": "nvmf_create_subsystem", 
00:47:09.017 "req_id": 1 00:47:09.018 } 00:47:09.018 Got JSON-RPC error response 00:47:09.018 response: 00:47:09.018 { 00:47:09.018 "code": -32602, 00:47:09.018 "message": "Invalid cntlid range [1-65520]" 00:47:09.018 }' 00:47:09.018 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:47:09.018 { 00:47:09.018 "nqn": "nqn.2016-06.io.spdk:cnode22674", 00:47:09.018 "max_cntlid": 65520, 00:47:09.018 "method": "nvmf_create_subsystem", 00:47:09.018 "req_id": 1 00:47:09.018 } 00:47:09.018 Got JSON-RPC error response 00:47:09.018 response: 00:47:09.018 { 00:47:09.018 "code": -32602, 00:47:09.018 "message": "Invalid cntlid range [1-65520]" 00:47:09.018 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:09.018 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5384 -i 6 -I 5 00:47:09.278 [2024-12-09 10:58:10.395948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5384: invalid cntlid range [6-5] 00:47:09.278 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:47:09.278 { 00:47:09.278 "nqn": "nqn.2016-06.io.spdk:cnode5384", 00:47:09.278 "min_cntlid": 6, 00:47:09.278 "max_cntlid": 5, 00:47:09.278 "method": "nvmf_create_subsystem", 00:47:09.278 "req_id": 1 00:47:09.278 } 00:47:09.278 Got JSON-RPC error response 00:47:09.278 response: 00:47:09.278 { 00:47:09.278 "code": -32602, 00:47:09.278 "message": "Invalid cntlid range [6-5]" 00:47:09.278 }' 00:47:09.278 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:47:09.278 { 00:47:09.278 "nqn": "nqn.2016-06.io.spdk:cnode5384", 00:47:09.278 "min_cntlid": 6, 00:47:09.278 "max_cntlid": 5, 00:47:09.278 "method": "nvmf_create_subsystem", 00:47:09.278 "req_id": 1 00:47:09.278 } 00:47:09.278 Got JSON-RPC error response 00:47:09.278 
response: 00:47:09.278 { 00:47:09.278 "code": -32602, 00:47:09.278 "message": "Invalid cntlid range [6-5]" 00:47:09.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:09.278 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:47:09.539 { 00:47:09.539 "name": "foobar", 00:47:09.539 "method": "nvmf_delete_target", 00:47:09.539 "req_id": 1 00:47:09.539 } 00:47:09.539 Got JSON-RPC error response 00:47:09.539 response: 00:47:09.539 { 00:47:09.539 "code": -32602, 00:47:09.539 "message": "The specified target doesn'\''t exist, cannot delete it." 00:47:09.539 }' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:47:09.539 { 00:47:09.539 "name": "foobar", 00:47:09.539 "method": "nvmf_delete_target", 00:47:09.539 "req_id": 1 00:47:09.539 } 00:47:09.539 Got JSON-RPC error response 00:47:09.539 response: 00:47:09.539 { 00:47:09.539 "code": -32602, 00:47:09.539 "message": "The specified target doesn't exist, cannot delete it." 
00:47:09.539 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:09.539 rmmod nvme_tcp 00:47:09.539 rmmod nvme_fabrics 00:47:09.539 rmmod nvme_keyring 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2356132 ']' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2356132 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2356132 ']' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2356132 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356132 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356132' 00:47:09.539 killing process with pid 2356132 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2356132 00:47:09.539 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2356132 00:47:09.799 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:09.799 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:09.799 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:09.799 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:47:09.799 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:10.059 10:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:10.059 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:11.971 00:47:11.971 real 0m13.799s 00:47:11.971 user 0m25.868s 00:47:11.971 sys 0m5.625s 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:47:11.971 ************************************ 00:47:11.971 END TEST nvmf_invalid 00:47:11.971 ************************************ 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:11.971 10:58:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:12.232 ************************************ 00:47:12.232 START TEST nvmf_connect_stress 00:47:12.232 ************************************ 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:47:12.232 * Looking for test storage... 
00:47:12.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:47:12.232 10:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:12.232 10:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:12.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:12.232 --rc genhtml_branch_coverage=1 00:47:12.232 --rc genhtml_function_coverage=1 00:47:12.232 --rc genhtml_legend=1 00:47:12.232 --rc geninfo_all_blocks=1 00:47:12.232 --rc geninfo_unexecuted_blocks=1 00:47:12.232 00:47:12.232 ' 00:47:12.232 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:12.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:12.232 --rc genhtml_branch_coverage=1 00:47:12.232 --rc genhtml_function_coverage=1 00:47:12.232 --rc genhtml_legend=1 00:47:12.232 --rc geninfo_all_blocks=1 00:47:12.232 --rc geninfo_unexecuted_blocks=1 00:47:12.232 00:47:12.232 ' 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:12.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:12.233 --rc genhtml_branch_coverage=1 00:47:12.233 --rc genhtml_function_coverage=1 00:47:12.233 --rc genhtml_legend=1 00:47:12.233 --rc geninfo_all_blocks=1 00:47:12.233 --rc geninfo_unexecuted_blocks=1 00:47:12.233 00:47:12.233 ' 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:12.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:12.233 --rc genhtml_branch_coverage=1 00:47:12.233 --rc genhtml_function_coverage=1 00:47:12.233 --rc genhtml_legend=1 00:47:12.233 --rc geninfo_all_blocks=1 00:47:12.233 --rc geninfo_unexecuted_blocks=1 00:47:12.233 00:47:12.233 ' 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:12.233 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:12.494 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:12.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
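[Editor's note] The `[: : integer expression expected` message from common.sh line 33 above comes from `'[' '' -eq 1 ']'`: an unset variable expands to the empty string, and `-eq` requires both operands to be integers. A minimal reproduction and a defensive form (the variable name `flag` is illustrative, not from the script):

```shell
#!/bin/sh
# Reproduce the error: -eq on an empty operand fails the test and prints
# "[: : integer expression expected" on stderr (suppressed here).
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "disabled or unset"
fi

# Defensive form: default an empty/unset value to 0 before the numeric test,
# so the comparison is always between two integers.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled or unset"
fi
```

Both branches print "disabled or unset" for an empty `flag`; only the second form avoids the stderr diagnostic.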
00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:47:12.495 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:19.091 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:47:19.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:19.091 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:47:19.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:19.091 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:47:19.091 Found net devices under 0000:af:00.0: cvl_0_0 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:47:19.091 Found net devices under 0000:af:00.1: cvl_0_1 
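[Editor's note] The "Found net devices under ..." lines above come from the sysfs discovery pattern visible in the trace: for each candidate PCI address, glob `/sys/bus/pci/devices/$pci/net/*` and strip the path prefix to get the interface names. A sketch of that pattern (the PCI address is illustrative; substitute one from `lspci -D` on a real host):

```shell
#!/bin/bash
# For a given PCI address, the kernel exposes its bound network interfaces
# as directories under /sys/bus/pci/devices/<addr>/net/.
pci="0000:af:00.0"
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)

# ${var##*/} strips everything up to and including the last '/', leaving
# only the interface name (e.g. cvl_0_0), exactly as common.sh@427 does.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

On a machine without that device the glob stays unexpanded, which is why the real script also checks link state (`[[ up == up ]]`) and array counts before using the result.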
00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:19.091 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:19.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:19.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:47:19.092 00:47:19.092 --- 10.0.0.2 ping statistics --- 00:47:19.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:19.092 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:19.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:19.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:47:19.092 00:47:19.092 --- 10.0.0.1 ping statistics --- 00:47:19.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:19.092 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:47:19.092 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2360076 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2360076 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2360076 ']' 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:19.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 [2024-12-09 10:58:19.717775] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:47:19.092 [2024-12-09 10:58:19.717855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:19.092 [2024-12-09 10:58:19.820571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:19.092 [2024-12-09 10:58:19.864303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:19.092 [2024-12-09 10:58:19.864346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:19.092 [2024-12-09 10:58:19.864358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:19.092 [2024-12-09 10:58:19.864367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:19.092 [2024-12-09 10:58:19.864375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:19.092 [2024-12-09 10:58:19.865735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:19.092 [2024-12-09 10:58:19.865823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:19.092 [2024-12-09 10:58:19.865826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:19.092 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 [2024-12-09 10:58:20.016248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 [2024-12-09 10:58:20.033069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:19.092 NULL1 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2360105 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:47:19.092 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:47:19.092-00:47:19.093 (last 2 xtrace lines repeated 19 more times, once per stress worker)
00:47:19.093 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2360105
00:47:19.093 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:47:19.093 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:19.093 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:47:19.353 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:19.353 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2360105
00:47:19.353 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:47:19.353 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@563 -- # xtrace_disable
00:47:19.353 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:47:19.924-00:47:29.019 (the 5-line liveness check -- [[ 0 == 0 ]], kill -0 2360105, rpc_cmd, xtrace_disable, set +x -- repeated while the stress workers ran)
00:47:29.279 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
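The loop traced above keeps checking whether the stress-test PID (2360105) is still alive with `kill -0`, issuing an RPC between checks. A minimal standalone sketch of that polling pattern (hypothetical script, not the actual connect_stress.sh):

```shell
#!/usr/bin/env bash
# Spawn a background worker and poll its liveness with kill -0,
# mirroring the "kill -0 $PID / rpc_cmd" loop traced above.
sleep 2 &            # stand-in for the real stress workload
pid=$!

while kill -0 "$pid" 2>/dev/null; do
    # connect_stress.sh issues an RPC here on each iteration;
    # this sketch just sleeps between liveness checks.
    sleep 0.2
done
wait "$pid" 2>/dev/null
echo "worker $pid exited"
```

`kill -0` sends no signal at all; it only reports (via its exit status) whether the process still exists, which is why the loop ends with the "No such process" error once the workload finishes.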
00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2360105 00:47:29.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2360105) - No such process 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2360105 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:29.540 rmmod nvme_tcp 00:47:29.540 rmmod nvme_fabrics 00:47:29.540 rmmod nvme_keyring 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2360076 ']' 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2360076 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2360076 ']' 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2360076 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360076 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360076' 00:47:29.540 killing process with pid 2360076 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2360076 00:47:29.540 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2360076 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:29.801 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:32.345 00:47:32.345 real 0m19.747s 00:47:32.345 user 0m41.666s 00:47:32.345 sys 0m8.084s 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:47:32.345 ************************************ 00:47:32.345 END TEST nvmf_connect_stress 00:47:32.345 ************************************ 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra -- 
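The `iptr` / `remove_spdk_ns` teardown traced above removes SPDK's firewall rules by round-tripping the ruleset through a filter (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) and then deleting the test network namespace. A sketch of that idea (hypothetical helper names; the real logic lives in nvmf/common.sh and needs root to apply):

```shell
#!/usr/bin/env bash
# Sketch of the traced cleanup: SPDK tags the iptables rules it adds,
# so teardown can save the ruleset, drop the tagged lines, and restore
# what remains. The filter stage is plain text processing.
strip_tagged_rules() {            # filter stage of the save|grep|restore pipe
    grep -v "${1:-SPDK_NVMF}"
}

if [[ $EUID -eq 0 ]]; then
    iptables-save | strip_tagged_rules SPDK_NVMF | iptables-restore
    # Remove the target's network namespace if it exists (name assumed here).
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
fi
```

Tagging rules at creation time and filtering on the tag at teardown avoids having to remember each individual rule that was added.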
common/autotest_common.sh@1111 -- # xtrace_disable 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:32.345 ************************************ 00:47:32.345 START TEST nvmf_fused_ordering 00:47:32.345 ************************************ 00:47:32.345 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:47:32.345 * Looking for test storage... 00:47:32.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:47:32.345 10:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:32.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.345 --rc genhtml_branch_coverage=1 00:47:32.345 --rc genhtml_function_coverage=1 00:47:32.345 --rc genhtml_legend=1 00:47:32.345 --rc geninfo_all_blocks=1 00:47:32.345 --rc geninfo_unexecuted_blocks=1 00:47:32.345 00:47:32.345 ' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:32.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.345 --rc genhtml_branch_coverage=1 00:47:32.345 --rc genhtml_function_coverage=1 00:47:32.345 --rc genhtml_legend=1 00:47:32.345 --rc geninfo_all_blocks=1 00:47:32.345 --rc geninfo_unexecuted_blocks=1 00:47:32.345 00:47:32.345 ' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:32.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.345 --rc genhtml_branch_coverage=1 00:47:32.345 --rc genhtml_function_coverage=1 00:47:32.345 --rc genhtml_legend=1 00:47:32.345 --rc geninfo_all_blocks=1 00:47:32.345 --rc geninfo_unexecuted_blocks=1 00:47:32.345 00:47:32.345 ' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:32.345 --rc lcov_branch_coverage=1 --rc 
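The lcov version gate traced above (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`) splits each version string on `.`, `-`, and `:` and compares the components numerically, left to right. A simplified standalone re-implementation of that comparison (a sketch, not the exact scripts/common.sh code):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component-by-component and echo
# "lt", "gt", or "eq" -- the same idea as the cmp_versions trace above
# (split on ".-:" via IFS, then compare numerically, padding with 0).
cmp_versions() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && { echo lt; return; }
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && { echo gt; return; }
    done
    echo eq
}

cmp_versions 1.15 2        # lt -- why the trace takes the "< 2" branch
cmp_versions 2.39.2 2.39   # gt -- missing components compare as 0
```

Plain string comparison would get this wrong (`"1.15" > "1.2"` lexically), which is why the script splits and compares each field as an integer.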
lcov_function_coverage=1 00:47:32.345 --rc genhtml_branch_coverage=1 00:47:32.345 --rc genhtml_function_coverage=1 00:47:32.345 --rc genhtml_legend=1 00:47:32.345 --rc geninfo_all_blocks=1 00:47:32.345 --rc geninfo_unexecuted_blocks=1 00:47:32.345 00:47:32.345 ' 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:47:32.345 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.346 10:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:32.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
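The `[: : integer expression expected` message above comes from an empty variable reaching a numeric test: the trace shows it expands to `'[' '' -eq 1 ']'`, which is not a valid integer comparison. A minimal sketch of the usual fix, defaulting the variable before comparing (the variable name `flag` is illustrative, not the actual common.sh variable):

```shell
#!/bin/sh
# Reproduces the failing pattern safely: an unset/empty value would
# make [ "" -eq 1 ] error out, so default it to 0 first.
flag=""
if [ "${flag:-0}" -eq 1 ]; then   # empty value is treated as 0
    echo "feature enabled"
else
    echo "feature disabled"
fi
```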
00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:47:32.346 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:38.928 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:38.929 10:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:47:38.929 Found 0000:af:00.0 (0x8086 - 0x159b) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:38.929 10:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:47:38.929 Found 0000:af:00.1 (0x8086 - 0x159b) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:38.929 10:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:47:38.929 Found net devices under 0000:af:00.0: cvl_0_0 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:47:38.929 Found net devices under 0000:af:00.1: cvl_0_1 
00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:38.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:38.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:47:38.929 00:47:38.929 --- 10.0.0.2 ping statistics --- 00:47:38.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:38.929 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:47:38.929 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:38.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:38.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:47:38.929 00:47:38.929 --- 10.0.0.1 ping statistics --- 00:47:38.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:38.929 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:47:38.930 10:58:39 
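The `ip netns`/`iptables`/`ping` calls traced above move one port into an isolated namespace so initiator and target traffic cross a real network path. A condensed sketch of the same wiring, assuming root; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses come from the log, but a veth pair stands in here for the two physical E810 ports:

```shell
#!/bin/sh
# System-configuration sketch (requires root); veth pair replaces
# the physical NIC ports used in the actual run.
ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Verify connectivity in both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```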
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2364603 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2364603 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2364603 ']' 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 [2024-12-09 10:58:39.485471] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:47:38.930 [2024-12-09 10:58:39.485526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:38.930 [2024-12-09 10:58:39.571323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:38.930 [2024-12-09 10:58:39.616416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:38.930 [2024-12-09 10:58:39.616458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:38.930 [2024-12-09 10:58:39.616469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:38.930 [2024-12-09 10:58:39.616478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:38.930 [2024-12-09 10:58:39.616487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:38.930 [2024-12-09 10:58:39.616961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 [2024-12-09 10:58:39.782683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 [2024-12-09 10:58:39.806863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 NULL1 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.930 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:38.930 [2024-12-09 10:58:39.873946] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:47:38.930 [2024-12-09 10:58:39.873988] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364781 ] 00:47:39.501 Attached to nqn.2016-06.io.spdk:cnode1 00:47:39.501 Namespace ID: 1 size: 1GB 00:47:39.501 fused_ordering(0) [... fused_ordering(1) through fused_ordering(1022) elided: 1024 sequential fused_ordering iterations logged between 00:47:39.501 and 00:47:41.475 ...] 00:47:41.475 fused_ordering(1023) 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:41.475 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:41.475 rmmod nvme_tcp 00:47:41.734 rmmod nvme_fabrics 00:47:41.734 rmmod nvme_keyring 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2364603 ']' 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2364603 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2364603 ']' 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2364603 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364603 00:47:41.734 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:41.735 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:41.735 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364603' 00:47:41.735 killing process with pid 2364603 00:47:41.735 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2364603 00:47:41.735 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2364603 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:41.995 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:43.906 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:43.907 00:47:43.907 real 0m12.077s 00:47:43.907 user 0m7.225s 00:47:43.907 sys 0m6.186s 00:47:43.907 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:43.907 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:47:43.907 ************************************ 00:47:43.907 END TEST nvmf_fused_ordering 00:47:43.907 ************************************ 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:47:44.167 10:58:45 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:44.167 ************************************ 00:47:44.167 START TEST nvmf_ns_masking 00:47:44.167 ************************************ 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:47:44.167 * Looking for test storage... 00:47:44.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:47:44.167 10:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.167 --rc genhtml_branch_coverage=1 00:47:44.167 --rc genhtml_function_coverage=1 00:47:44.167 --rc genhtml_legend=1 00:47:44.167 --rc geninfo_all_blocks=1 00:47:44.167 --rc geninfo_unexecuted_blocks=1 00:47:44.167 00:47:44.167 ' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.167 --rc genhtml_branch_coverage=1 00:47:44.167 --rc genhtml_function_coverage=1 00:47:44.167 --rc genhtml_legend=1 00:47:44.167 --rc geninfo_all_blocks=1 00:47:44.167 --rc geninfo_unexecuted_blocks=1 00:47:44.167 00:47:44.167 ' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.167 --rc genhtml_branch_coverage=1 00:47:44.167 --rc genhtml_function_coverage=1 00:47:44.167 --rc genhtml_legend=1 00:47:44.167 --rc geninfo_all_blocks=1 00:47:44.167 --rc geninfo_unexecuted_blocks=1 00:47:44.167 00:47:44.167 ' 00:47:44.167 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.167 --rc genhtml_branch_coverage=1 00:47:44.167 --rc 
genhtml_function_coverage=1 00:47:44.167 --rc genhtml_legend=1 00:47:44.167 --rc geninfo_all_blocks=1 00:47:44.168 --rc geninfo_unexecuted_blocks=1 00:47:44.168 00:47:44.168 ' 00:47:44.168 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:44.168 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:44.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4bfcb507-c881-48b5-8d94-746a7b20e6fc 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3c595b94-fee1-478c-9342-6e2242ebb63f 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ab1fd87e-e76a-4b47-8b4f-5b0ae7883c16 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:47:44.429 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:51.013 10:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:51.013 10:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:47:51.013 Found 0000:af:00.0 (0x8086 - 0x159b) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:47:51.013 Found 0000:af:00.1 (0x8086 - 0x159b) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:47:51.013 Found net devices under 0000:af:00.0: cvl_0_0 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:47:51.013 Found net devices under 0000:af:00.1: cvl_0_1 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:51.013 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:51.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:51.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:47:51.014 00:47:51.014 --- 10.0.0.2 ping statistics --- 00:47:51.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:51.014 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:51.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:51.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:47:51.014 00:47:51.014 --- 10.0.0.1 ping statistics --- 00:47:51.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:51.014 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2368266 00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2368266 
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2368266 ']'
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:47:51.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:47:51.014 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:47:51.014 [2024-12-09 10:58:51.849336] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:47:51.014 [2024-12-09 10:58:51.849413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:47:51.014 [2024-12-09 10:58:51.982397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:47:51.014 [2024-12-09 10:58:52.036393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:47:51.014 [2024-12-09 10:58:52.036439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
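The `waitforlisten` call above blocks until the newly started target is reachable over its JSON-RPC UNIX socket at /var/tmp/spdk.sock. A simplified, assumed sketch of that idea (the real implementation in autotest_common.sh is more involved; `wait_for_rpc_sock` here is a hypothetical stand-in that only polls for the socket's existence):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the waitforlisten idea: poll until a UNIX
# socket exists at the given path, up to a bounded number of retries.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0  # socket exists: target is listening
        sleep 0.1
    done
    return 1                        # gave up: caller should fail the test
}
```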
00:47:51.014 [2024-12-09 10:58:52.036454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:47:51.014 [2024-12-09 10:58:52.036468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:47:51.014 [2024-12-09 10:58:52.036479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:47:51.014 [2024-12-09 10:58:52.037104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:47:51.014 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:47:51.014 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:47:51.014 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:47:51.014 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:51.014 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:47:51.274 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:47:51.274 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:47:51.534 [2024-12-09 10:58:52.461585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:47:51.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:47:51.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:47:51.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
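Condensed from the surrounding records, the target is provisioned through the following sequence of rpc.py calls (each entry is the RPC method plus its arguments as they appear in this trace; this is a summary, not a complete reproduction of the test):

```shell
#!/usr/bin/env bash
# Summary of the rpc.py calls this test issues to provision the target:
# transport, two malloc bdevs, a subsystem, a namespace, and a listener.
rpc_sequence=(
    "nvmf_create_transport -t tcp -o -u 8192"
    "bdev_malloc_create 64 512 -b Malloc1"
    "bdev_malloc_create 64 512 -b Malloc2"
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME"
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1"
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${rpc_sequence[@]}"
```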
00:47:51.794 Malloc1 00:47:51.794 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:47:52.054 Malloc2 00:47:52.054 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:47:52.314 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:47:52.574 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:52.835 [2024-12-09 10:58:53.868900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:52.835 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:47:52.835 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1fd87e-e76a-4b47-8b4f-5b0ae7883c16 -a 10.0.0.2 -s 4420 -i 4 00:47:53.094 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:47:53.094 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:47:53.094 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:47:53.094 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:47:53.094 10:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:47:55.007 [ 0]:0x1 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:55.007 
10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e0ee7ac68df486ab68739d5e7250a95 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e0ee7ac68df486ab68739d5e7250a95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:55.007 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:47:55.267 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:47:55.267 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:55.267 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:47:55.267 [ 0]:0x1 00:47:55.267 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:47:55.267 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e0ee7ac68df486ab68739d5e7250a95 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e0ee7ac68df486ab68739d5e7250a95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:47:55.527 [ 1]:0x2 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
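The ns_is_visible checks running above combine `nvme list-ns` with the NGUID reported by `nvme id-ns ... -o json`: a namespace hidden from the connecting host reports an all-zero NGUID. A hedged reconstruction of that check (`ns_is_visible` mirrors the commands in the trace; `nguid_is_nonzero` is a hypothetical helper split out so the comparison can be exercised without NVMe hardware):

```shell
#!/usr/bin/env bash
# The all-zero NGUID that a hidden namespace reports in this trace.
ZERO_NGUID=00000000000000000000000000000000

# Hypothetical helper: true when an NGUID is not the all-zero value.
nguid_is_nonzero() {
    [ "$1" != "$ZERO_NGUID" ]
}

# Reconstruction of the visibility check: the namespace must show up in
# list-ns AND report a real NGUID. Needs nvme-cli and jq on a live host.
ns_is_visible() {
    local ctrl=$1 nsid=$2 nguid
    nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    nguid_is_nonzero "$nguid"
}
```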
00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:55.527 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:47:55.528 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:55.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:55.787 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:47:56.048 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:47:56.308 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:47:56.308 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1fd87e-e76a-4b47-8b4f-5b0ae7883c16 -a 10.0.0.2 -s 4420 -i 4 00:47:56.568 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:47:56.568 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:47:56.568 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:47:56.568 10:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:47:56.568 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:47:56.568 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
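The NOT wrapper invoked here asserts that a command fails: after the namespace is re-added with --no-auto-visible, an unmatched host must not see it. A simplified sketch of such a wrapper, matching the es / (( es > 128 )) handling visible in the trace (the real version in autotest_common.sh does more bookkeeping):

```shell
#!/usr/bin/env bash
# Simplified NOT: succeed exactly when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 usually mean death by signal; treat those as a
    # real error rather than the expected "command returned false".
    if (( es > 128 )); then
        return "$es"
    fi
    (( es != 0 ))
}
```

So `NOT ns_is_visible 0x1` passes only when the namespace is hidden, which is exactly what the masked-namespace checks in this trace rely on.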
00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:58.483 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:47:58.744 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:47:58.745 [ 0]:0x2 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:58.745 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:59.005 [ 0]:0x1 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e0ee7ac68df486ab68739d5e7250a95 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e0ee7ac68df486ab68739d5e7250a95 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:47:59.005 [ 1]:0x2 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:59.005 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:47:59.265 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:47:59.266 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:47:59.526 [ 0]:0x2 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:59.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:59.526 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:47:59.786 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:47:59.786 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1fd87e-e76a-4b47-8b4f-5b0ae7883c16 -a 10.0.0.2 -s 4420 -i 4 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:48:00.046 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:48:01.956 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:48:01.956 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:48:01.956 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:48:01.956 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:48:01.956 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:48:01.956 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:48:01.956 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:48:01.956 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:02.216 [ 0]:0x1 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:02.216 10:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e0ee7ac68df486ab68739d5e7250a95 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e0ee7ac68df486ab68739d5e7250a95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:02.216 [ 1]:0x2 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:02.216 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:48:02.788 
10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:02.788 [ 0]:0x2 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:02.788 10:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:48:02.788 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:48:03.049 [2024-12-09 10:59:04.039893] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:48:03.049 request:
00:48:03.049 {
00:48:03.049 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:48:03.049 "nsid": 2,
00:48:03.049 "host": "nqn.2016-06.io.spdk:host1",
00:48:03.049 "method": "nvmf_ns_remove_host",
00:48:03.049 "req_id": 1
00:48:03.049 }
00:48:03.049 Got JSON-RPC error response
00:48:03.049 response:
00:48:03.049 {
00:48:03.049 "code": -32602,
00:48:03.049 "message": "Invalid parameters"
00:48:03.049 }
00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:48:03.049 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:48:03.050 10:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:03.050 [ 0]:0x2 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d358f5048ba4f39b574daed82ecbe52 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d358f5048ba4f39b574daed82ecbe52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:48:03.050 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:48:03.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2370075 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2370075 
/var/tmp/host.sock 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2370075 ']' 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:48:03.309 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:03.310 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:48:03.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:48:03.310 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:03.310 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:48:03.310 [2024-12-09 10:59:04.396034] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
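The `NOT` wrapper that dominates the xtrace above (common/autotest_common.sh@652-679) asserts that a command is *expected* to fail: the test passes when the RPC call is correctly rejected. A minimal sketch of its core behaviour, with the `es`/`valid_exec_arg` bookkeeping omitted (assumption: the real helper also records the exit status and special-cases codes above 128, as the `(( es > 128 ))` lines show):

```shell
# Minimal re-sketch of the NOT helper: run a command that should fail
# and invert its exit status, so an expected rejection counts as a pass.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}
```

In the run above, `NOT rpc.py nvmf_ns_remove_host ...` passes precisely because the target answers with the `-32602 Invalid parameters` JSON-RPC error.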
00:48:03.310 [2024-12-09 10:59:04.396117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370075 ] 00:48:03.569 [2024-12-09 10:59:04.493778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:03.569 [2024-12-09 10:59:04.536511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:03.829 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:03.829 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:48:03.829 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:48:04.096 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:48:04.356 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4bfcb507-c881-48b5-8d94-746a7b20e6fc 00:48:04.356 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:48:04.356 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4BFCB507C88148B58D94746A7B20E6FC -i 00:48:04.615 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3c595b94-fee1-478c-9342-6e2242ebb63f 00:48:04.615 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:48:04.615 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3C595B94FEE1478C93426E2242EBB63F -i 00:48:04.875 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:48:05.135 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:48:05.394 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:48:05.394 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:48:05.654 nvme0n1 00:48:05.918 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:48:05.918 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:48:06.178 nvme1n2 00:48:06.178 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:48:06.178 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # xargs 00:48:06.178 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:48:06.178 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:48:06.178 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:48:06.438 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:48:06.438 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:48:06.438 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:48:06.438 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:48:06.698 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4bfcb507-c881-48b5-8d94-746a7b20e6fc == \4\b\f\c\b\5\0\7\-\c\8\8\1\-\4\8\b\5\-\8\d\9\4\-\7\4\6\a\7\b\2\0\e\6\f\c ]] 00:48:06.698 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:48:06.698 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:48:06.698 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:48:06.957 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3c595b94-fee1-478c-9342-6e2242ebb63f == \3\c\5\9\5\b\9\4\-\f\e\e\1\-\4\7\8\c\-\9\3\4\2\-\6\e\2\2\4\2\e\b\b\6\3\f ]] 00:48:06.957 10:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:48:07.217 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4bfcb507-c881-48b5-8d94-746a7b20e6fc 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4BFCB507C88148B58D94746A7B20E6FC 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4BFCB507C88148B58D94746A7B20E6FC 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:48:07.477 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4BFCB507C88148B58D94746A7B20E6FC 00:48:07.812 [2024-12-09 10:59:08.777527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:48:07.812 [2024-12-09 10:59:08.777572] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:48:07.812 [2024-12-09 10:59:08.777590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:48:07.812 request: 00:48:07.812 { 00:48:07.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:07.812 "namespace": { 00:48:07.812 "bdev_name": "invalid", 00:48:07.812 "nsid": 1, 00:48:07.813 "nguid": "4BFCB507C88148B58D94746A7B20E6FC", 00:48:07.813 "no_auto_visible": false, 00:48:07.813 "hide_metadata": false 00:48:07.813 }, 00:48:07.813 "method": "nvmf_subsystem_add_ns", 00:48:07.813 "req_id": 1 00:48:07.813 } 00:48:07.813 Got JSON-RPC error response 00:48:07.813 response: 00:48:07.813 { 00:48:07.813 "code": -32602, 00:48:07.813 "message": "Invalid parameters" 00:48:07.813 } 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:48:07.813 10:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4bfcb507-c881-48b5-8d94-746a7b20e6fc 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:48:07.813 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4BFCB507C88148B58D94746A7B20E6FC -i 00:48:08.212 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:48:10.195 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:48:10.195 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:48:10.195 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2370075 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2370075 ']' 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2370075 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:48:10.459 10:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370075 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370075' 00:48:10.459 killing process with pid 2370075 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2370075 00:48:10.459 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2370075 00:48:10.733 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:10.998 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
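The `uuid2nguid` helper invoked at ns_masking.sh@124/@125/@141 earlier in the trace builds the NGUID passed to `nvmf_subsystem_add_ns -g`. Judging by the `tr -d -` call in the trace and the upper-case result, it is roughly equivalent to (assumption: inferred from the log; the real nvmf/common.sh@787 helper may differ in detail):

```shell
# Rough equivalent of uuid2nguid: strip the dashes from a UUID and
# upper-case it to get the 32-hex-digit NGUID.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}
```

This reproduces the mapping seen in the log: `4bfcb507-c881-48b5-8d94-746a7b20e6fc` becomes `4BFCB507C88148B58D94746A7B20E6FC`.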
00:48:10.998 rmmod nvme_tcp 00:48:10.998 rmmod nvme_fabrics 00:48:10.998 rmmod nvme_keyring 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2368266 ']' 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2368266 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2368266 ']' 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2368266 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2368266 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2368266' 00:48:11.261 killing process with pid 2368266 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2368266 00:48:11.261 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2368266 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:11.530 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:13.493 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:13.493 00:48:13.493 real 0m29.531s 00:48:13.493 user 0m37.272s 00:48:13.493 sys 0m8.508s 00:48:13.493 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:13.493 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:48:13.493 ************************************ 00:48:13.493 END TEST nvmf_ns_masking 00:48:13.493 ************************************ 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:13.775 ************************************ 00:48:13.775 START TEST nvmf_nvme_cli 00:48:13.775 ************************************ 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:48:13.775 * Looking for test storage... 00:48:13.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:13.775 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:14.048 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:14.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:14.048 --rc genhtml_branch_coverage=1 00:48:14.048 --rc genhtml_function_coverage=1 00:48:14.048 --rc genhtml_legend=1 00:48:14.048 --rc geninfo_all_blocks=1 00:48:14.048 --rc geninfo_unexecuted_blocks=1 00:48:14.048 
00:48:14.048 ' 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:14.049 --rc genhtml_branch_coverage=1 00:48:14.049 --rc genhtml_function_coverage=1 00:48:14.049 --rc genhtml_legend=1 00:48:14.049 --rc geninfo_all_blocks=1 00:48:14.049 --rc geninfo_unexecuted_blocks=1 00:48:14.049 00:48:14.049 ' 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:14.049 --rc genhtml_branch_coverage=1 00:48:14.049 --rc genhtml_function_coverage=1 00:48:14.049 --rc genhtml_legend=1 00:48:14.049 --rc geninfo_all_blocks=1 00:48:14.049 --rc geninfo_unexecuted_blocks=1 00:48:14.049 00:48:14.049 ' 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:14.049 --rc genhtml_branch_coverage=1 00:48:14.049 --rc genhtml_function_coverage=1 00:48:14.049 --rc genhtml_legend=1 00:48:14.049 --rc geninfo_all_blocks=1 00:48:14.049 --rc geninfo_unexecuted_blocks=1 00:48:14.049 00:48:14.049 ' 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:14.049 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:14.049 10:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:14.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:48:14.049 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:48:20.771 10:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:20.771 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:48:20.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:48:20.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:20.772 10:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:48:20.772 Found net devices under 0000:af:00.0: cvl_0_0 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:48:20.772 Found net devices under 0000:af:00.1: cvl_0_1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:20.772 10:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:20.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:20.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:48:20.772 00:48:20.772 --- 10.0.0.2 ping statistics --- 00:48:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.772 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:20.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:20.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:48:20.772 00:48:20.772 --- 10.0.0.1 ping statistics --- 00:48:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.772 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:20.772 10:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2374332 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2374332 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2374332 ']' 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:20.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:20.772 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:48:20.772 [2024-12-09 10:59:21.629745] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:48:20.772 [2024-12-09 10:59:21.629821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:20.772 [2024-12-09 10:59:21.761365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:20.772 [2024-12-09 10:59:21.819531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:20.772 [2024-12-09 10:59:21.819572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:20.772 [2024-12-09 10:59:21.819588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:20.772 [2024-12-09 10:59:21.819601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:20.773 [2024-12-09 10:59:21.819613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:20.773 [2024-12-09 10:59:21.821350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:20.773 [2024-12-09 10:59:21.821438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:20.773 [2024-12-09 10:59:21.821456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:20.773 [2024-12-09 10:59:21.821460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 [2024-12-09 10:59:22.668925] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 Malloc0 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 Malloc1 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 [2024-12-09 10:59:22.754682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:21.748 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -a 10.0.0.2 -s 4420 00:48:22.014 00:48:22.014 Discovery Log Number of Records 2, Generation counter 2 00:48:22.014 =====Discovery Log Entry 0====== 00:48:22.014 trtype: tcp 00:48:22.014 adrfam: ipv4 00:48:22.014 subtype: current discovery subsystem 00:48:22.014 treq: not required 00:48:22.014 portid: 0 00:48:22.014 trsvcid: 4420 
00:48:22.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:48:22.014 traddr: 10.0.0.2 00:48:22.014 eflags: explicit discovery connections, duplicate discovery information 00:48:22.014 sectype: none 00:48:22.014 =====Discovery Log Entry 1====== 00:48:22.014 trtype: tcp 00:48:22.014 adrfam: ipv4 00:48:22.014 subtype: nvme subsystem 00:48:22.014 treq: not required 00:48:22.014 portid: 0 00:48:22.014 trsvcid: 4420 00:48:22.014 subnqn: nqn.2016-06.io.spdk:cnode1 00:48:22.014 traddr: 10.0.0.2 00:48:22.014 eflags: none 00:48:22.014 sectype: none 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:48:22.014 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:48:22.978 10:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:48:22.978 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:48:22.978 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:48:22.978 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:48:22.978 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:48:22.978 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:48:24.932 
10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:48:24.932 /dev/nvme0n2 ]] 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:24.932 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:48:25.204 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:48:25.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:25.484 rmmod nvme_tcp 00:48:25.484 rmmod nvme_fabrics 00:48:25.484 rmmod nvme_keyring 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2374332 ']' 
00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2374332 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2374332 ']' 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2374332 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2374332 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:25.484 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2374332' 00:48:25.484 killing process with pid 2374332 00:48:25.485 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2374332 00:48:25.485 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2374332 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:26.077 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:28.097 00:48:28.097 real 0m14.284s 00:48:28.097 user 0m22.507s 00:48:28.097 sys 0m5.675s 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:48:28.097 ************************************ 00:48:28.097 END TEST nvmf_nvme_cli 00:48:28.097 ************************************ 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:28.097 ************************************ 
00:48:28.097 START TEST nvmf_vfio_user 00:48:28.097 ************************************ 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:48:28.097 * Looking for test storage... 00:48:28.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:48:28.097 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:28.098 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:48:28.098 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:48:28.371 
10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:48:28.371 10:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:28.371 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:28.372 --rc genhtml_branch_coverage=1 00:48:28.372 --rc genhtml_function_coverage=1 00:48:28.372 --rc genhtml_legend=1 00:48:28.372 --rc geninfo_all_blocks=1 00:48:28.372 --rc geninfo_unexecuted_blocks=1 00:48:28.372 00:48:28.372 ' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:28.372 --rc genhtml_branch_coverage=1 00:48:28.372 --rc genhtml_function_coverage=1 00:48:28.372 --rc genhtml_legend=1 00:48:28.372 --rc geninfo_all_blocks=1 00:48:28.372 --rc geninfo_unexecuted_blocks=1 00:48:28.372 00:48:28.372 ' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:28.372 --rc genhtml_branch_coverage=1 00:48:28.372 --rc genhtml_function_coverage=1 00:48:28.372 --rc genhtml_legend=1 00:48:28.372 --rc geninfo_all_blocks=1 00:48:28.372 --rc geninfo_unexecuted_blocks=1 00:48:28.372 00:48:28.372 ' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:28.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:28.372 --rc genhtml_branch_coverage=1 00:48:28.372 --rc genhtml_function_coverage=1 00:48:28.372 --rc genhtml_legend=1 00:48:28.372 --rc geninfo_all_blocks=1 00:48:28.372 --rc geninfo_unexecuted_blocks=1 00:48:28.372 00:48:28.372 ' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:28.372 
10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:28.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:48:28.372 10:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2375559 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2375559' 00:48:28.372 Process pid: 2375559 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2375559 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2375559 ']' 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:28.372 10:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:28.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:48:28.372 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:48:28.372 [2024-12-09 10:59:29.407012] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:48:28.372 [2024-12-09 10:59:29.407088] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:28.372 [2024-12-09 10:59:29.524856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:28.654 [2024-12-09 10:59:29.580961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:28.654 [2024-12-09 10:59:29.581007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:28.654 [2024-12-09 10:59:29.581022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:28.654 [2024-12-09 10:59:29.581036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:28.654 [2024-12-09 10:59:29.581048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:28.654 [2024-12-09 10:59:29.582840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:28.654 [2024-12-09 10:59:29.582881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:28.654 [2024-12-09 10:59:29.582862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:28.654 [2024-12-09 10:59:29.582886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:29.241 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:29.241 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:48:29.241 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:48:30.624 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:48:30.883 Malloc1 00:48:30.883 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:48:31.143 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:48:31.404 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:48:31.404 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:48:31.404 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:48:31.404 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:48:31.664 Malloc2 00:48:31.664 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:48:31.924 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:48:32.185 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:48:32.449 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:48:32.449 [2024-12-09 10:59:33.436156] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:48:32.449 [2024-12-09 10:59:33.436200] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376118 ] 00:48:32.449 [2024-12-09 10:59:33.506750] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:48:32.449 [2024-12-09 10:59:33.516108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:48:32.449 [2024-12-09 10:59:33.516142] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f630555e000 00:48:32.450 [2024-12-09 10:59:33.517109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.518110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.519112] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.520116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.521124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.522118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.523124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.524128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:32.450 [2024-12-09 10:59:33.525143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:48:32.450 [2024-12-09 10:59:33.525160] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6305553000 00:48:32.450 [2024-12-09 10:59:33.526767] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:48:32.450 [2024-12-09 10:59:33.549628] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:48:32.450 [2024-12-09 10:59:33.549679] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:48:32.450 [2024-12-09 10:59:33.555323] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:48:32.450 [2024-12-09 10:59:33.555383] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:48:32.450 [2024-12-09 10:59:33.555487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:48:32.450 [2024-12-09 10:59:33.555514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:48:32.450 [2024-12-09 10:59:33.555525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:48:32.450 [2024-12-09 10:59:33.556325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:48:32.450 [2024-12-09 10:59:33.556346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:48:32.450 [2024-12-09 10:59:33.556360] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:48:32.450 [2024-12-09 10:59:33.557333] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:48:32.450 [2024-12-09 10:59:33.557348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:48:32.450 [2024-12-09 10:59:33.557363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.558338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:48:32.450 [2024-12-09 10:59:33.558355] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.559338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:48:32.450 [2024-12-09 10:59:33.559353] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:48:32.450 [2024-12-09 10:59:33.559363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.559380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.559490] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:48:32.450 [2024-12-09 10:59:33.559500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.559510] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:48:32.450 [2024-12-09 10:59:33.560348] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:48:32.450 [2024-12-09 10:59:33.561351] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:48:32.450 [2024-12-09 10:59:33.562360] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:48:32.450 [2024-12-09 10:59:33.563351] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:32.450 [2024-12-09 10:59:33.563432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:48:32.450 [2024-12-09 10:59:33.564365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:48:32.450 [2024-12-09 10:59:33.564380] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:48:32.450 [2024-12-09 10:59:33.564390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:48:32.450 [2024-12-09 10:59:33.564433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564464] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:32.450 [2024-12-09 10:59:33.564474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:32.450 [2024-12-09 10:59:33.564481] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.450 [2024-12-09 10:59:33.564501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:32.450 [2024-12-09 10:59:33.564539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:48:32.450 [2024-12-09 10:59:33.564559] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:48:32.450 [2024-12-09 10:59:33.564569] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:48:32.450 [2024-12-09 10:59:33.564579] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:48:32.450 [2024-12-09 10:59:33.564588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:48:32.450 [2024-12-09 10:59:33.564598] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:48:32.450 [2024-12-09 10:59:33.564607] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:48:32.450 [2024-12-09 10:59:33.564617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:48:32.450 [2024-12-09 10:59:33.564668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:48:32.450 [2024-12-09 10:59:33.564686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:32.450 [2024-12-09 
10:59:33.564700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:32.450 [2024-12-09 10:59:33.564714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:32.450 [2024-12-09 10:59:33.564728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:32.450 [2024-12-09 10:59:33.564737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:48:32.450 [2024-12-09 10:59:33.564782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:48:32.450 [2024-12-09 10:59:33.564794] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:48:32.450 [2024-12-09 10:59:33.564804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:48:32.450 [2024-12-09 10:59:33.564842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.564859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.564936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.564952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.564965] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:48:32.451 [2024-12-09 10:59:33.564974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:48:32.451 [2024-12-09 10:59:33.564981] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.564992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565024] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:48:32.451 [2024-12-09 10:59:33.565049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565076] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:32.451 [2024-12-09 10:59:33.565086] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:32.451 [2024-12-09 10:59:33.565093] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.565103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:32.451 [2024-12-09 10:59:33.565184] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:32.451 [2024-12-09 10:59:33.565191] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.565202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565301] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:48:32.451 [2024-12-09 10:59:33.565311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:48:32.451 [2024-12-09 10:59:33.565321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:48:32.451 [2024-12-09 10:59:33.565347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565490] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:48:32.451 [2024-12-09 10:59:33.565500] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:48:32.451 [2024-12-09 10:59:33.565507] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:48:32.451 [2024-12-09 10:59:33.565514] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:48:32.451 [2024-12-09 10:59:33.565521] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:48:32.451 [2024-12-09 10:59:33.565531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:48:32.451 [2024-12-09 10:59:33.565544] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:48:32.451 [2024-12-09 10:59:33.565554] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:48:32.451 [2024-12-09 10:59:33.565561] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.565571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565583] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:48:32.451 [2024-12-09 10:59:33.565593] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:32.451 [2024-12-09 10:59:33.565600] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.565610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:48:32.451 [2024-12-09 10:59:33.565632] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:48:32.451 [2024-12-09 10:59:33.565639] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:32.451 [2024-12-09 10:59:33.565657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:48:32.451 [2024-12-09 10:59:33.565670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:48:32.451 [2024-12-09 10:59:33.565728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:48:32.452 ===================================================== 00:48:32.452 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:48:32.452 ===================================================== 00:48:32.452 Controller Capabilities/Features 00:48:32.452 ================================ 00:48:32.452 Vendor ID: 4e58 00:48:32.452 Subsystem Vendor ID: 4e58 00:48:32.452 Serial Number: SPDK1 00:48:32.452 Model Number: SPDK bdev Controller 00:48:32.452 Firmware Version: 25.01 00:48:32.452 Recommended Arb Burst: 6 00:48:32.452 IEEE OUI Identifier: 8d 6b 50 00:48:32.452 Multi-path I/O 00:48:32.452 May have multiple subsystem ports: Yes 00:48:32.452 May have multiple controllers: Yes 00:48:32.452 Associated with SR-IOV VF: No 00:48:32.452 Max Data Transfer Size: 131072 00:48:32.452 Max Number of Namespaces: 32 00:48:32.452 Max Number of I/O Queues: 127 00:48:32.452 NVMe Specification Version (VS): 1.3 00:48:32.452 NVMe Specification Version (Identify): 1.3 00:48:32.452 Maximum Queue Entries: 256 00:48:32.452 Contiguous Queues Required: Yes 00:48:32.452 Arbitration Mechanisms Supported 00:48:32.452 Weighted Round Robin: Not Supported 00:48:32.452 Vendor Specific: Not Supported 00:48:32.452 Reset Timeout: 15000 ms 00:48:32.452 Doorbell Stride: 4 bytes 00:48:32.452 NVM Subsystem Reset: Not Supported 00:48:32.452 Command Sets Supported 00:48:32.452 NVM Command Set: Supported 00:48:32.452 Boot Partition: Not Supported 00:48:32.452 Memory 
Page Size Minimum: 4096 bytes 00:48:32.452 Memory Page Size Maximum: 4096 bytes 00:48:32.452 Persistent Memory Region: Not Supported 00:48:32.452 Optional Asynchronous Events Supported 00:48:32.452 Namespace Attribute Notices: Supported 00:48:32.452 Firmware Activation Notices: Not Supported 00:48:32.452 ANA Change Notices: Not Supported 00:48:32.452 PLE Aggregate Log Change Notices: Not Supported 00:48:32.452 LBA Status Info Alert Notices: Not Supported 00:48:32.452 EGE Aggregate Log Change Notices: Not Supported 00:48:32.452 Normal NVM Subsystem Shutdown event: Not Supported 00:48:32.452 Zone Descriptor Change Notices: Not Supported 00:48:32.452 Discovery Log Change Notices: Not Supported 00:48:32.452 Controller Attributes 00:48:32.452 128-bit Host Identifier: Supported 00:48:32.452 Non-Operational Permissive Mode: Not Supported 00:48:32.452 NVM Sets: Not Supported 00:48:32.452 Read Recovery Levels: Not Supported 00:48:32.452 Endurance Groups: Not Supported 00:48:32.452 Predictable Latency Mode: Not Supported 00:48:32.452 Traffic Based Keep ALive: Not Supported 00:48:32.452 Namespace Granularity: Not Supported 00:48:32.452 SQ Associations: Not Supported 00:48:32.452 UUID List: Not Supported 00:48:32.452 Multi-Domain Subsystem: Not Supported 00:48:32.452 Fixed Capacity Management: Not Supported 00:48:32.452 Variable Capacity Management: Not Supported 00:48:32.452 Delete Endurance Group: Not Supported 00:48:32.452 Delete NVM Set: Not Supported 00:48:32.452 Extended LBA Formats Supported: Not Supported 00:48:32.452 Flexible Data Placement Supported: Not Supported 00:48:32.452 00:48:32.452 Controller Memory Buffer Support 00:48:32.452 ================================ 00:48:32.452 Supported: No 00:48:32.452 00:48:32.452 Persistent Memory Region Support 00:48:32.452 ================================ 00:48:32.452 Supported: No 00:48:32.452 00:48:32.452 Admin Command Set Attributes 00:48:32.452 ============================ 00:48:32.452 Security Send/Receive: Not Supported 
00:48:32.452 Format NVM: Not Supported 00:48:32.452 Firmware Activate/Download: Not Supported 00:48:32.452 Namespace Management: Not Supported 00:48:32.452 Device Self-Test: Not Supported 00:48:32.452 Directives: Not Supported 00:48:32.452 NVMe-MI: Not Supported 00:48:32.452 Virtualization Management: Not Supported 00:48:32.452 Doorbell Buffer Config: Not Supported 00:48:32.452 Get LBA Status Capability: Not Supported 00:48:32.452 Command & Feature Lockdown Capability: Not Supported 00:48:32.452 Abort Command Limit: 4 00:48:32.452 Async Event Request Limit: 4 00:48:32.452 Number of Firmware Slots: N/A 00:48:32.452 Firmware Slot 1 Read-Only: N/A 00:48:32.452 Firmware Activation Without Reset: N/A 00:48:32.452 Multiple Update Detection Support: N/A 00:48:32.452 Firmware Update Granularity: No Information Provided 00:48:32.452 Per-Namespace SMART Log: No 00:48:32.452 Asymmetric Namespace Access Log Page: Not Supported 00:48:32.452 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:48:32.452 Command Effects Log Page: Supported 00:48:32.452 Get Log Page Extended Data: Supported 00:48:32.452 Telemetry Log Pages: Not Supported 00:48:32.452 Persistent Event Log Pages: Not Supported 00:48:32.452 Supported Log Pages Log Page: May Support 00:48:32.452 Commands Supported & Effects Log Page: Not Supported 00:48:32.452 Feature Identifiers & Effects Log Page:May Support 00:48:32.452 NVMe-MI Commands & Effects Log Page: May Support 00:48:32.452 Data Area 4 for Telemetry Log: Not Supported 00:48:32.452 Error Log Page Entries Supported: 128 00:48:32.452 Keep Alive: Supported 00:48:32.452 Keep Alive Granularity: 10000 ms 00:48:32.452 00:48:32.452 NVM Command Set Attributes 00:48:32.452 ========================== 00:48:32.452 Submission Queue Entry Size 00:48:32.452 Max: 64 00:48:32.452 Min: 64 00:48:32.452 Completion Queue Entry Size 00:48:32.452 Max: 16 00:48:32.452 Min: 16 00:48:32.452 Number of Namespaces: 32 00:48:32.452 Compare Command: Supported 00:48:32.452 Write Uncorrectable 
Command: Not Supported 00:48:32.452 Dataset Management Command: Supported 00:48:32.452 Write Zeroes Command: Supported 00:48:32.452 Set Features Save Field: Not Supported 00:48:32.452 Reservations: Not Supported 00:48:32.452 Timestamp: Not Supported 00:48:32.452 Copy: Supported 00:48:32.452 Volatile Write Cache: Present 00:48:32.452 Atomic Write Unit (Normal): 1 00:48:32.452 Atomic Write Unit (PFail): 1 00:48:32.452 Atomic Compare & Write Unit: 1 00:48:32.452 Fused Compare & Write: Supported 00:48:32.452 Scatter-Gather List 00:48:32.452 SGL Command Set: Supported (Dword aligned) 00:48:32.452 SGL Keyed: Not Supported 00:48:32.452 SGL Bit Bucket Descriptor: Not Supported 00:48:32.452 SGL Metadata Pointer: Not Supported 00:48:32.452 Oversized SGL: Not Supported 00:48:32.452 SGL Metadata Address: Not Supported 00:48:32.452 SGL Offset: Not Supported 00:48:32.452 Transport SGL Data Block: Not Supported 00:48:32.452 Replay Protected Memory Block: Not Supported 00:48:32.452 00:48:32.452 Firmware Slot Information 00:48:32.452 ========================= 00:48:32.452 Active slot: 1 00:48:32.452 Slot 1 Firmware Revision: 25.01 00:48:32.452 00:48:32.452 00:48:32.452 Commands Supported and Effects 00:48:32.452 ============================== 00:48:32.452 Admin Commands 00:48:32.452 -------------- 00:48:32.452 Get Log Page (02h): Supported 00:48:32.452 Identify (06h): Supported 00:48:32.452 Abort (08h): Supported 00:48:32.452 Set Features (09h): Supported 00:48:32.452 Get Features (0Ah): Supported 00:48:32.452 Asynchronous Event Request (0Ch): Supported 00:48:32.452 Keep Alive (18h): Supported 00:48:32.452 I/O Commands 00:48:32.452 ------------ 00:48:32.452 Flush (00h): Supported LBA-Change 00:48:32.452 Write (01h): Supported LBA-Change 00:48:32.452 Read (02h): Supported 00:48:32.452 Compare (05h): Supported 00:48:32.452 Write Zeroes (08h): Supported LBA-Change 00:48:32.452 Dataset Management (09h): Supported LBA-Change 00:48:32.452 Copy (19h): Supported LBA-Change 00:48:32.452 
00:48:32.452 Error Log 00:48:32.452 ========= 00:48:32.452 00:48:32.452 Arbitration 00:48:32.452 =========== 00:48:32.452 Arbitration Burst: 1 00:48:32.452 00:48:32.453 Power Management 00:48:32.453 ================ 00:48:32.453 Number of Power States: 1 00:48:32.453 Current Power State: Power State #0 00:48:32.453 Power State #0: 00:48:32.453 Max Power: 0.00 W 00:48:32.453 Non-Operational State: Operational 00:48:32.453 Entry Latency: Not Reported 00:48:32.453 Exit Latency: Not Reported 00:48:32.453 Relative Read Throughput: 0 00:48:32.453 Relative Read Latency: 0 00:48:32.453 Relative Write Throughput: 0 00:48:32.453 Relative Write Latency: 0 00:48:32.453 Idle Power: Not Reported 00:48:32.453 Active Power: Not Reported 00:48:32.453 Non-Operational Permissive Mode: Not Supported 00:48:32.453 00:48:32.453 Health Information 00:48:32.453 ================== 00:48:32.453 Critical Warnings: 00:48:32.453 Available Spare Space: OK 00:48:32.453 Temperature: OK 00:48:32.453 Device Reliability: OK 00:48:32.453 Read Only: No 00:48:32.453 Volatile Memory Backup: OK 00:48:32.453 Current Temperature: 0 Kelvin (-273 Celsius) 00:48:32.453 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:48:32.453 Available Spare: 0% 00:48:32.453 Available Sp[2024-12-09 10:59:33.565859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:48:32.453 [2024-12-09 10:59:33.565875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:48:32.453 [2024-12-09 10:59:33.565922] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:48:32.453 [2024-12-09 10:59:33.565939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:32.453 [2024-12-09 10:59:33.565952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:32.453 [2024-12-09 10:59:33.565964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:32.453 [2024-12-09 10:59:33.565977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:32.453 [2024-12-09 10:59:33.566377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:48:32.453 [2024-12-09 10:59:33.566396] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:48:32.453 [2024-12-09 10:59:33.567381] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:32.453 [2024-12-09 10:59:33.567439] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:48:32.453 [2024-12-09 10:59:33.567451] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:48:32.453 [2024-12-09 10:59:33.568392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:48:32.453 [2024-12-09 10:59:33.568413] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:48:32.453 [2024-12-09 10:59:33.568477] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:48:32.453 [2024-12-09 10:59:33.571658] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:48:32.713 are Threshold: 0% 00:48:32.713 Life Percentage Used: 0% 
00:48:32.713 Data Units Read: 0 00:48:32.713 Data Units Written: 0 00:48:32.713 Host Read Commands: 0 00:48:32.713 Host Write Commands: 0 00:48:32.713 Controller Busy Time: 0 minutes 00:48:32.713 Power Cycles: 0 00:48:32.713 Power On Hours: 0 hours 00:48:32.713 Unsafe Shutdowns: 0 00:48:32.713 Unrecoverable Media Errors: 0 00:48:32.713 Lifetime Error Log Entries: 0 00:48:32.713 Warning Temperature Time: 0 minutes 00:48:32.713 Critical Temperature Time: 0 minutes 00:48:32.713 00:48:32.713 Number of Queues 00:48:32.713 ================ 00:48:32.713 Number of I/O Submission Queues: 127 00:48:32.713 Number of I/O Completion Queues: 127 00:48:32.713 00:48:32.713 Active Namespaces 00:48:32.713 ================= 00:48:32.714 Namespace ID:1 00:48:32.714 Error Recovery Timeout: Unlimited 00:48:32.714 Command Set Identifier: NVM (00h) 00:48:32.714 Deallocate: Supported 00:48:32.714 Deallocated/Unwritten Error: Not Supported 00:48:32.714 Deallocated Read Value: Unknown 00:48:32.714 Deallocate in Write Zeroes: Not Supported 00:48:32.714 Deallocated Guard Field: 0xFFFF 00:48:32.714 Flush: Supported 00:48:32.714 Reservation: Supported 00:48:32.714 Namespace Sharing Capabilities: Multiple Controllers 00:48:32.714 Size (in LBAs): 131072 (0GiB) 00:48:32.714 Capacity (in LBAs): 131072 (0GiB) 00:48:32.714 Utilization (in LBAs): 131072 (0GiB) 00:48:32.714 NGUID: 8EF4FA9BAE724BA486436C328B9C574A 00:48:32.714 UUID: 8ef4fa9b-ae72-4ba4-8643-6c328b9c574a 00:48:32.714 Thin Provisioning: Not Supported 00:48:32.714 Per-NS Atomic Units: Yes 00:48:32.714 Atomic Boundary Size (Normal): 0 00:48:32.714 Atomic Boundary Size (PFail): 0 00:48:32.714 Atomic Boundary Offset: 0 00:48:32.714 Maximum Single Source Range Length: 65535 00:48:32.714 Maximum Copy Length: 65535 00:48:32.714 Maximum Source Range Count: 1 00:48:32.714 NGUID/EUI64 Never Reused: No 00:48:32.714 Namespace Write Protected: No 00:48:32.714 Number of LBA Formats: 1 00:48:32.714 Current LBA Format: LBA Format #00 00:48:32.714 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:48:32.714 00:48:32.714 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:48:32.974 [2024-12-09 10:59:33.958179] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:38.262 Initializing NVMe Controllers 00:48:38.263 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:48:38.263 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:48:38.263 Initialization complete. Launching workers. 00:48:38.263 ======================================================== 00:48:38.263 Latency(us) 00:48:38.263 Device Information : IOPS MiB/s Average min max 00:48:38.263 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39919.46 155.94 3206.27 959.09 10604.43 00:48:38.263 ======================================================== 00:48:38.263 Total : 39919.46 155.94 3206.27 959.09 10604.43 00:48:38.263 00:48:38.263 [2024-12-09 10:59:38.975940] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:38.263 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:48:38.263 [2024-12-09 10:59:39.311456] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:43.553 Initializing NVMe Controllers 00:48:43.553 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:48:43.553 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:48:43.553 Initialization complete. Launching workers. 00:48:43.553 ======================================================== 00:48:43.553 Latency(us) 00:48:43.553 Device Information : IOPS MiB/s Average min max 00:48:43.553 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16019.54 62.58 7989.57 4985.05 11979.96 00:48:43.553 ======================================================== 00:48:43.553 Total : 16019.54 62.58 7989.57 4985.05 11979.96 00:48:43.553 00:48:43.553 [2024-12-09 10:59:44.342619] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:43.553 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:48:43.553 [2024-12-09 10:59:44.679081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:48.844 [2024-12-09 10:59:49.751946] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:48.844 Initializing NVMe Controllers 00:48:48.844 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:48:48.844 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:48:48.844 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:48:48.844 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:48:48.844 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:48:48.844 Initialization complete. 
Launching workers. 00:48:48.844 Starting thread on core 2 00:48:48.844 Starting thread on core 3 00:48:48.844 Starting thread on core 1 00:48:48.844 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:48:49.105 [2024-12-09 10:59:50.218746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:52.407 [2024-12-09 10:59:53.293935] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:52.407 Initializing NVMe Controllers 00:48:52.407 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:48:52.407 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:48:52.407 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:48:52.407 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:48:52.407 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:48:52.407 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:48:52.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:48:52.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:48:52.407 Initialization complete. Launching workers. 
00:48:52.407 Starting thread on core 1 with urgent priority queue 00:48:52.407 Starting thread on core 2 with urgent priority queue 00:48:52.407 Starting thread on core 3 with urgent priority queue 00:48:52.407 Starting thread on core 0 with urgent priority queue 00:48:52.407 SPDK bdev Controller (SPDK1 ) core 0: 5215.67 IO/s 19.17 secs/100000 ios 00:48:52.407 SPDK bdev Controller (SPDK1 ) core 1: 5619.00 IO/s 17.80 secs/100000 ios 00:48:52.407 SPDK bdev Controller (SPDK1 ) core 2: 5561.00 IO/s 17.98 secs/100000 ios 00:48:52.407 SPDK bdev Controller (SPDK1 ) core 3: 5661.67 IO/s 17.66 secs/100000 ios 00:48:52.407 ======================================================== 00:48:52.407 00:48:52.407 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:48:52.668 [2024-12-09 10:59:53.806088] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:52.668 Initializing NVMe Controllers 00:48:52.668 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:48:52.668 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:48:52.668 Namespace ID: 1 size: 0GB 00:48:52.668 Initialization complete. 00:48:52.668 INFO: using host memory buffer for IO 00:48:52.668 Hello world! 
00:48:52.668 [2024-12-09 10:59:53.840734] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:52.928 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:48:53.500 [2024-12-09 10:59:54.373121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:54.444 Initializing NVMe Controllers 00:48:54.444 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:48:54.444 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:48:54.444 Initialization complete. Launching workers. 00:48:54.444 submit (in ns) avg, min, max = 10094.7, 4420.9, 4003846.1 00:48:54.444 complete (in ns) avg, min, max = 20128.3, 2593.0, 4002857.4 00:48:54.444 00:48:54.444 Submit histogram 00:48:54.444 ================ 00:48:54.444 Range in us Cumulative Count 00:48:54.444 4.397 - 4.424: 0.0436% ( 7) 00:48:54.444 4.424 - 4.452: 1.4960% ( 233) 00:48:54.444 4.452 - 4.480: 6.6820% ( 832) 00:48:54.444 4.480 - 4.508: 15.6330% ( 1436) 00:48:54.444 4.508 - 4.536: 27.8065% ( 1953) 00:48:54.444 4.536 - 4.563: 42.2677% ( 2320) 00:48:54.444 4.563 - 4.591: 55.2328% ( 2080) 00:48:54.444 4.591 - 4.619: 66.3716% ( 1787) 00:48:54.444 4.619 - 4.647: 75.1979% ( 1416) 00:48:54.444 4.647 - 4.675: 80.1097% ( 788) 00:48:54.444 4.675 - 4.703: 83.6128% ( 562) 00:48:54.444 4.703 - 4.730: 85.3955% ( 286) 00:48:54.444 4.730 - 4.758: 86.5923% ( 192) 00:48:54.444 4.758 - 4.786: 87.6831% ( 175) 00:48:54.444 4.786 - 4.814: 89.2850% ( 257) 00:48:54.444 4.814 - 4.842: 91.0491% ( 283) 00:48:54.444 4.842 - 4.870: 93.2245% ( 349) 00:48:54.444 4.870 - 4.897: 94.9760% ( 281) 00:48:54.444 4.897 - 4.925: 96.5156% ( 247) 00:48:54.444 4.925 - 4.953: 97.5628% ( 168) 00:48:54.444 4.953 - 4.981: 98.2734% 
( 114) 00:48:54.444 4.981 - 5.009: 98.7409% ( 75) 00:48:54.444 5.009 - 5.037: 99.0899% ( 56) 00:48:54.444 5.037 - 5.064: 99.2021% ( 18) 00:48:54.444 5.064 - 5.092: 99.2707% ( 11) 00:48:54.444 5.092 - 5.120: 99.2956% ( 4) 00:48:54.444 5.120 - 5.148: 99.3268% ( 5) 00:48:54.444 5.148 - 5.176: 99.3330% ( 1) 00:48:54.444 5.176 - 5.203: 99.3393% ( 1) 00:48:54.444 5.203 - 5.231: 99.3455% ( 1) 00:48:54.444 5.231 - 5.259: 99.3580% ( 2) 00:48:54.444 5.343 - 5.370: 99.3704% ( 2) 00:48:54.444 5.370 - 5.398: 99.3767% ( 1) 00:48:54.444 5.426 - 5.454: 99.3829% ( 1) 00:48:54.444 5.454 - 5.482: 99.3891% ( 1) 00:48:54.444 5.537 - 5.565: 99.4016% ( 2) 00:48:54.444 5.593 - 5.621: 99.4141% ( 2) 00:48:54.444 5.621 - 5.649: 99.4203% ( 1) 00:48:54.444 5.704 - 5.732: 99.4265% ( 1) 00:48:54.444 5.732 - 5.760: 99.4328% ( 1) 00:48:54.444 5.788 - 5.816: 99.4390% ( 1) 00:48:54.444 5.843 - 5.871: 99.4452% ( 1) 00:48:54.444 6.094 - 6.122: 99.4515% ( 1) 00:48:54.444 8.070 - 8.125: 99.4577% ( 1) 00:48:54.444 8.403 - 8.459: 99.4702% ( 2) 00:48:54.444 8.459 - 8.515: 99.4764% ( 1) 00:48:54.444 8.682 - 8.737: 99.4826% ( 1) 00:48:54.444 8.960 - 9.016: 99.4889% ( 1) 00:48:54.444 9.016 - 9.071: 99.5013% ( 2) 00:48:54.444 9.071 - 9.127: 99.5076% ( 1) 00:48:54.444 9.183 - 9.238: 99.5138% ( 1) 00:48:54.444 9.238 - 9.294: 99.5200% ( 1) 00:48:54.444 9.294 - 9.350: 99.5325% ( 2) 00:48:54.444 9.350 - 9.405: 99.5512% ( 3) 00:48:54.444 9.405 - 9.461: 99.5574% ( 1) 00:48:54.444 9.461 - 9.517: 99.5637% ( 1) 00:48:54.444 9.517 - 9.572: 99.5699% ( 1) 00:48:54.444 9.572 - 9.628: 99.5761% ( 1) 00:48:54.444 9.683 - 9.739: 99.5824% ( 1) 00:48:54.444 9.739 - 9.795: 99.5886% ( 1) 00:48:54.444 9.795 - 9.850: 99.6011% ( 2) 00:48:54.444 9.850 - 9.906: 99.6198% ( 3) 00:48:54.444 9.906 - 9.962: 99.6260% ( 1) 00:48:54.444 10.017 - 10.073: 99.6385% ( 2) 00:48:54.444 10.129 - 10.184: 99.6509% ( 2) 00:48:54.444 10.184 - 10.240: 99.6634% ( 2) 00:48:54.444 10.240 - 10.296: 99.6696% ( 1) 00:48:54.444 10.296 - 10.351: 99.6821% ( 2) 
00:48:54.444 10.351 - 10.407: 99.6883% ( 1) 00:48:54.444 10.463 - 10.518: 99.7008% ( 2) 00:48:54.444 10.518 - 10.574: 99.7070% ( 1) 00:48:54.444 10.574 - 10.630: 99.7195% ( 2) 00:48:54.444 10.685 - 10.741: 99.7320% ( 2) 00:48:54.444 10.797 - 10.852: 99.7382% ( 1) 00:48:54.444 10.852 - 10.908: 99.7444% ( 1) 00:48:54.444 11.075 - 11.130: 99.7507% ( 1) 00:48:54.444 11.186 - 11.242: 99.7631% ( 2) 00:48:54.444 11.353 - 11.409: 99.7694% ( 1) 00:48:54.444 11.464 - 11.520: 99.7756% ( 1) 00:48:54.444 11.576 - 11.631: 99.7818% ( 1) 00:48:54.444 11.631 - 11.687: 99.8005% ( 3) 00:48:54.444 11.854 - 11.910: 99.8068% ( 1) 00:48:54.444 11.910 - 11.965: 99.8255% ( 3) 00:48:54.444 12.188 - 12.243: 99.8317% ( 1) 00:48:54.444 12.243 - 12.299: 99.8379% ( 1) 00:48:54.444 12.744 - 12.800: 99.8442% ( 1) 00:48:54.444 13.857 - 13.913: 99.8504% ( 1) 00:48:54.444 15.249 - 15.360: 99.8566% ( 1) 00:48:54.444 15.471 - 15.583: 99.8629% ( 1) 00:48:54.444 3647.221 - 3675.715: 99.8691% ( 1) 00:48:54.444 3989.148 - 4017.642: 100.0000% ( 21) 00:48:54.444 00:48:54.444 Complete histogram 00:48:54.444 ================== 00:48:54.444 Range in us Cumulative Count 00:48:54.444 2.588 - 2.602: 0.2244% ( 36) 00:48:54.444 2.602 - 2.616: 7.2742% ( 1131) 00:48:54.444 2.616 - 2.630: 27.1957% ( 3196) 00:48:54.444 2.630 - 2.643: 36.5642% ( 1503) 00:48:54.444 2.643 - 2.657: 38.8768% ( 371) 00:48:54.444 2.657 - 2.671: 47.3540% ( 1360) 00:48:54.444 2.671 - 2.685: 68.6842% ( 3422) 00:48:54.444 2.685 - 2.699: 82.5718% ( 2228) 00:48:54.444 2.699 - 2.713: 88.4560% ( 944) 00:48:54.444 2.713 - 2.727: 92.1399% ( 591) 00:48:54.444 2.727 - 2.741: 93.7107% ( 252) 00:48:54.444 2.741 - 2.755: 95.2503% ( 247) 00:48:54.444 2.755 - 2.769: 97.2262% ( 317) 00:48:54.444 2.769 - 2.783: 98.3918% ( 187) 00:48:54.444 2.783 - 2.797: 98.8281% ( 70) 00:48:54.444 2.797 - 2.810: 98.9902% ( 26) 00:48:54.444 2.810 - 2.824: 99.0650% ( 12) 00:48:54.444 2.824 - 2.838: 99.0899% ( 4) 00:48:54.444 2.838 - 2.852: 99.1086% ( 3) 00:48:54.444 2.852 - 
2.866: 99.1273% ( 3) 00:48:54.444 2.866 - 2.880: 99.1398% ( 2) 00:48:54.444 2.894 - 2.908: 99.1523% ( 2) 00:48:54.444 2.908 - 2.922: 99.1585% ( 1) 00:48:54.444 2.922 - 2.936: 99.1647% ( 1) 00:48:54.444 2.963 - 2.977: 99.1710% ( 1) 00:48:54.444 2.977 - 2.991: 99.1772% ( 1) 00:48:54.444 2.991 - 3.005: 99.1959% ( 3) 00:48:54.444 3.005 - 3.019: 99.2021% ( 1) 00:48:54.444 3.019 - 3.033: 99.2084% ( 1) 00:48:54.444 3.047 - 3.061: 99.2146% ( 1) 00:48:54.444 3.061 - 3.075: 99.2208% ( 1) 00:48:54.444 3.103 - 3.117: 99.2271% ( 1) 00:48:54.444 3.144 - 3.158: 99.2333% ( 1) 00:48:54.444 3.158 - 3.172: 99.2395% ( 1) 00:48:54.444 3.200 - 3.214: 99.2458% ( 1) 00:48:54.444 3.242 - 3.256: 99.2520% ( 1) 00:48:54.444 3.270 - 3.283: 99.2582% ( 1) 00:48:54.444 3.311 - 3.325: 99.2645% ( 1) 00:48:54.444 3.353 - 3.367: 99.2707% ( 1) 00:48:54.444 5.983 - 6.010: 99.2769% ( 1) 00:48:54.445 6.038 - 6.066: 99.2832% ( 1) 00:48:54.445 6.344 - 6.372: 99.2894% ( 1) 00:48:54.445 6.650 - 6.678: 99.2956% ( 1) 00:48:54.445 6.790 - 6.817: 99.3019% ( 1) 00:48:54.445 7.012 - 7.040: 99.3081% ( 1) 00:48:54.445 7.096 - 7.123: 99.3143% ( 1) 00:48:54.445 7.346 - 7.402: 99.3206% ( 1) 00:48:54.445 7.624 - 7.680: 99.3268% ( 1) 00:48:54.445 7.680 - 7.736: 99.3330% ( 1) 00:48:54.445 7.736 - 7.791: 99.3455% ( 2) 00:48:54.445 8.014 - 8.070: 99.3517% ( 1) 00:48:54.445 8.125 - 8.181: 99.3642% ( 2) 00:48:54.445 8.237 - 8.292: 99.3704% ( 1) 00:48:54.445 8.292 - 8.348: 99.3829% ( 2) 00:48:54.445 8.348 - 8.403: 99.3954% ( 2) 00:48:54.445 8.403 - 8.459: 99.4016% ( 1) 00:48:54.445 8.570 - 8.626: 99.4078% ( 1) 00:48:54.445 8.682 - 8.737: 99.4141% ( 1) 00:48:54.445 8.737 - 8.793: 99.4203% ( 1) 00:48:54.445 8.849 - 8.904: 99.4265% ( 1) 00:48:54.445 9.016 - 9.071: 99.4328% ( 1) 00:48:54.445 9.071 - 9.127: 99.4452% ( 2) 00:48:54.445 9.238 - 9.294: 99.4515% ( 1) 00:48:54.445 9.294 - 9.350: 99.4702% ( 3) 00:48:54.445 9.405 - 9.461: 99.4764% ( 1) 00:48:54.445 9.461 - 9.517: 99.4826% ( 1) 00:48:54.445 9.850 - 9.906: 99.4889% ( 1) 
00:48:54.445 9.906 - 9.962: 99.5076% ( 3) 00:48:54.445 9.962 - 10.017: 99.5138% ( 1) 00:48:54.445 10.296 - 10.351: 99.5200% ( 1) 00:48:54.445 10.741 - 10.797: 99.5263% ( 1) 00:48:54.445 11.075 - 11.130: 99.5325% ( 1) 00:48:54.445 11.242 - 11.297: 99.5387% ( 1) 00:48:54.445 12.466 - 12.522: 99.5450% ( 1) [2024-12-09 10:59:55.399446] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:54.445 15.137 - 15.249: 99.5512% ( 1) 00:48:54.445 16.584 - 16.696: 99.5574% ( 1) 00:48:54.445 17.809 - 17.920: 99.5637% ( 1) 00:48:54.445 3989.148 - 4017.642: 100.0000% ( 70) 00:48:54.445 00:48:54.445 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:48:54.445 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:48:54.445 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:48:54.445 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:48:54.445 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:48:54.706 [ 00:48:54.706 { 00:48:54.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:48:54.706 "subtype": "Discovery", 00:48:54.706 "listen_addresses": [], 00:48:54.706 "allow_any_host": true, 00:48:54.706 "hosts": [] 00:48:54.706 }, 00:48:54.706 { 00:48:54.706 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:48:54.706 "subtype": "NVMe", 00:48:54.706 "listen_addresses": [ 00:48:54.706 { 00:48:54.706 "trtype": "VFIOUSER", 00:48:54.706 "adrfam": "IPv4", 00:48:54.706 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:48:54.706 "trsvcid": "0" 00:48:54.706 }
00:48:54.706 ], 00:48:54.706 "allow_any_host": true, 00:48:54.706 "hosts": [], 00:48:54.706 "serial_number": "SPDK1", 00:48:54.706 "model_number": "SPDK bdev Controller", 00:48:54.706 "max_namespaces": 32, 00:48:54.706 "min_cntlid": 1, 00:48:54.706 "max_cntlid": 65519, 00:48:54.706 "namespaces": [ 00:48:54.706 { 00:48:54.706 "nsid": 1, 00:48:54.706 "bdev_name": "Malloc1", 00:48:54.706 "name": "Malloc1", 00:48:54.706 "nguid": "8EF4FA9BAE724BA486436C328B9C574A", 00:48:54.706 "uuid": "8ef4fa9b-ae72-4ba4-8643-6c328b9c574a" 00:48:54.706 } 00:48:54.706 ] 00:48:54.706 }, 00:48:54.706 { 00:48:54.706 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:48:54.706 "subtype": "NVMe", 00:48:54.706 "listen_addresses": [ 00:48:54.706 { 00:48:54.706 "trtype": "VFIOUSER", 00:48:54.706 "adrfam": "IPv4", 00:48:54.706 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:48:54.706 "trsvcid": "0" 00:48:54.706 } 00:48:54.706 ], 00:48:54.706 "allow_any_host": true, 00:48:54.706 "hosts": [], 00:48:54.706 "serial_number": "SPDK2", 00:48:54.706 "model_number": "SPDK bdev Controller", 00:48:54.706 "max_namespaces": 32, 00:48:54.706 "min_cntlid": 1, 00:48:54.706 "max_cntlid": 65519, 00:48:54.706 "namespaces": [ 00:48:54.706 { 00:48:54.706 "nsid": 1, 00:48:54.706 "bdev_name": "Malloc2", 00:48:54.706 "name": "Malloc2", 00:48:54.706 "nguid": "BEDB6F0E16E845E99EA2BC487138BC6D", 00:48:54.706 "uuid": "bedb6f0e-16e8-45e9-9ea2-bc487138bc6d" 00:48:54.706 } 00:48:54.706 ] 00:48:54.706 } 00:48:54.706 ] 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=2378952 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:48:54.706 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:48:54.967 [2024-12-09 10:59:55.899067] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:48:54.967 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:48:55.227 Malloc3 00:48:55.227 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:48:55.491 [2024-12-09 10:59:56.484330] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:48:55.491 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:48:55.491 Asynchronous Event Request test 00:48:55.491 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:48:55.491 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:48:55.491 Registering asynchronous event callbacks... 00:48:55.491 Starting namespace attribute notice tests for all controllers... 00:48:55.491 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:48:55.491 aer_cb - Changed Namespace 00:48:55.491 Cleaning up... 
00:48:55.752 [ 00:48:55.752 { 00:48:55.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:48:55.752 "subtype": "Discovery", 00:48:55.752 "listen_addresses": [], 00:48:55.752 "allow_any_host": true, 00:48:55.752 "hosts": [] 00:48:55.752 }, 00:48:55.752 { 00:48:55.752 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:48:55.752 "subtype": "NVMe", 00:48:55.752 "listen_addresses": [ 00:48:55.752 { 00:48:55.752 "trtype": "VFIOUSER", 00:48:55.752 "adrfam": "IPv4", 00:48:55.752 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:48:55.752 "trsvcid": "0" 00:48:55.752 } 00:48:55.752 ], 00:48:55.752 "allow_any_host": true, 00:48:55.752 "hosts": [], 00:48:55.752 "serial_number": "SPDK1", 00:48:55.752 "model_number": "SPDK bdev Controller", 00:48:55.752 "max_namespaces": 32, 00:48:55.752 "min_cntlid": 1, 00:48:55.752 "max_cntlid": 65519, 00:48:55.752 "namespaces": [ 00:48:55.752 { 00:48:55.752 "nsid": 1, 00:48:55.752 "bdev_name": "Malloc1", 00:48:55.752 "name": "Malloc1", 00:48:55.752 "nguid": "8EF4FA9BAE724BA486436C328B9C574A", 00:48:55.752 "uuid": "8ef4fa9b-ae72-4ba4-8643-6c328b9c574a" 00:48:55.752 }, 00:48:55.752 { 00:48:55.752 "nsid": 2, 00:48:55.752 "bdev_name": "Malloc3", 00:48:55.752 "name": "Malloc3", 00:48:55.752 "nguid": "560C48EC763B4D7A8667BEA0F539DB8B", 00:48:55.752 "uuid": "560c48ec-763b-4d7a-8667-bea0f539db8b" 00:48:55.752 } 00:48:55.752 ] 00:48:55.752 }, 00:48:55.752 { 00:48:55.752 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:48:55.752 "subtype": "NVMe", 00:48:55.752 "listen_addresses": [ 00:48:55.752 { 00:48:55.752 "trtype": "VFIOUSER", 00:48:55.752 "adrfam": "IPv4", 00:48:55.753 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:48:55.753 "trsvcid": "0" 00:48:55.753 } 00:48:55.753 ], 00:48:55.753 "allow_any_host": true, 00:48:55.753 "hosts": [], 00:48:55.753 "serial_number": "SPDK2", 00:48:55.753 "model_number": "SPDK bdev Controller", 00:48:55.753 "max_namespaces": 32, 00:48:55.753 "min_cntlid": 1, 00:48:55.753 "max_cntlid": 65519, 00:48:55.753 "namespaces": [ 
00:48:55.753 { 00:48:55.753 "nsid": 1, 00:48:55.753 "bdev_name": "Malloc2", 00:48:55.753 "name": "Malloc2", 00:48:55.753 "nguid": "BEDB6F0E16E845E99EA2BC487138BC6D", 00:48:55.753 "uuid": "bedb6f0e-16e8-45e9-9ea2-bc487138bc6d" 00:48:55.753 } 00:48:55.753 ] 00:48:55.753 } 00:48:55.753 ] 00:48:55.753 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2378952 00:48:55.753 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:48:55.753 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:48:55.753 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:48:55.753 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:48:55.753 [2024-12-09 10:59:56.800026] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:48:55.753 [2024-12-09 10:59:56.800071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379129 ] 00:48:55.753 [2024-12-09 10:59:56.872678] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:48:55.753 [2024-12-09 10:59:56.875037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:48:55.753 [2024-12-09 10:59:56.875072] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fde032e0000 00:48:55.753 [2024-12-09 10:59:56.876042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.877049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.878057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.879069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.880076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.881078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.882083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:48:55.753 
[2024-12-09 10:59:56.883095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:48:55.753 [2024-12-09 10:59:56.884109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:48:55.753 [2024-12-09 10:59:56.884129] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fde032d5000 00:48:55.753 [2024-12-09 10:59:56.885738] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:48:55.753 [2024-12-09 10:59:56.907636] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:48:55.753 [2024-12-09 10:59:56.907684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:48:55.753 [2024-12-09 10:59:56.912802] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:48:55.753 [2024-12-09 10:59:56.912863] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:48:55.753 [2024-12-09 10:59:56.912970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:48:55.753 [2024-12-09 10:59:56.912996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:48:55.753 [2024-12-09 10:59:56.913007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:48:55.753 [2024-12-09 10:59:56.913804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:48:55.753 [2024-12-09 10:59:56.913827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:48:55.753 [2024-12-09 10:59:56.913842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:48:55.753 [2024-12-09 10:59:56.914809] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:48:55.753 [2024-12-09 10:59:56.914828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:48:55.753 [2024-12-09 10:59:56.914842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.915814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:48:55.753 [2024-12-09 10:59:56.915832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.916815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:48:55.753 [2024-12-09 10:59:56.916835] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:48:55.753 [2024-12-09 10:59:56.916847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.916863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.916976] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:48:55.753 [2024-12-09 10:59:56.916986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.916996] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:48:55.753 [2024-12-09 10:59:56.917829] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:48:55.753 [2024-12-09 10:59:56.918836] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:48:55.753 [2024-12-09 10:59:56.919843] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:48:55.753 [2024-12-09 10:59:56.920834] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:48:55.753 [2024-12-09 10:59:56.920895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:48:55.753 [2024-12-09 10:59:56.921865] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:48:55.753 [2024-12-09 10:59:56.921883] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:48:55.753 [2024-12-09 10:59:56.921894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:48:55.753 [2024-12-09 10:59:56.921924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:48:55.753 [2024-12-09 10:59:56.921939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:48:55.753 [2024-12-09 10:59:56.921965] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:55.753 [2024-12-09 10:59:56.921975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:55.753 [2024-12-09 10:59:56.921986] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:55.753 [2024-12-09 10:59:56.922005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.930660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.930690] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:48:56.016 [2024-12-09 10:59:56.930701] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:48:56.016 [2024-12-09 10:59:56.930711] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:48:56.016 [2024-12-09 10:59:56.930721] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:48:56.016 [2024-12-09 10:59:56.930732] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:48:56.016 [2024-12-09 10:59:56.930741] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:48:56.016 [2024-12-09 10:59:56.930756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.930777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.930795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.938654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.938685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.016 [2024-12-09 10:59:56.938700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.016 [2024-12-09 10:59:56.938714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.016 [2024-12-09 10:59:56.938728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.016 [2024-12-09 10:59:56.938738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.938756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.938771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.946655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.946670] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:48:56.016 [2024-12-09 10:59:56.946680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.946694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.946705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.946723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.954653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.954740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.954756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:48:56.016 
[2024-12-09 10:59:56.954770] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:48:56.016 [2024-12-09 10:59:56.954780] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:48:56.016 [2024-12-09 10:59:56.954787] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.016 [2024-12-09 10:59:56.954798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.962651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.962671] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:48:56.016 [2024-12-09 10:59:56.962695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.962710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:48:56.016 [2024-12-09 10:59:56.962723] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:56.016 [2024-12-09 10:59:56.962732] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:56.016 [2024-12-09 10:59:56.962740] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.016 [2024-12-09 10:59:56.962750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:56.016 [2024-12-09 10:59:56.970651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:48:56.016 [2024-12-09 10:59:56.970676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.970692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.970705] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:48:56.017 [2024-12-09 10:59:56.970714] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:56.017 [2024-12-09 10:59:56.970721] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.017 [2024-12-09 10:59:56.970732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:56.978652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:56.978670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978745] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:48:56.017 [2024-12-09 10:59:56.978755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:48:56.017 [2024-12-09 10:59:56.978765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:48:56.017 [2024-12-09 10:59:56.978790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:56.986657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:56.986682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:56.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:56.994675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:57.002652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 
10:59:57.002675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:57.010652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:57.010682] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:48:56.017 [2024-12-09 10:59:57.010692] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:48:56.017 [2024-12-09 10:59:57.010700] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:48:56.017 [2024-12-09 10:59:57.010707] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:48:56.017 [2024-12-09 10:59:57.010714] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:48:56.017 [2024-12-09 10:59:57.010724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:48:56.017 [2024-12-09 10:59:57.010737] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:48:56.017 [2024-12-09 10:59:57.010747] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:48:56.017 [2024-12-09 10:59:57.010754] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.017 [2024-12-09 10:59:57.010764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:57.010776] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:48:56.017 [2024-12-09 10:59:57.010786] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:48:56.017 [2024-12-09 10:59:57.010795] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.017 [2024-12-09 10:59:57.010805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:57.010818] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:48:56.017 [2024-12-09 10:59:57.010828] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:48:56.017 [2024-12-09 10:59:57.010835] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:48:56.017 [2024-12-09 10:59:57.010845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:48:56.017 [2024-12-09 10:59:57.018655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:57.018680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:57.018700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:48:56.017 [2024-12-09 10:59:57.018714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:48:56.017 ===================================================== 00:48:56.017 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:48:56.017 ===================================================== 00:48:56.017 Controller Capabilities/Features 00:48:56.017 
================================ 00:48:56.017 Vendor ID: 4e58 00:48:56.017 Subsystem Vendor ID: 4e58 00:48:56.017 Serial Number: SPDK2 00:48:56.017 Model Number: SPDK bdev Controller 00:48:56.017 Firmware Version: 25.01 00:48:56.017 Recommended Arb Burst: 6 00:48:56.017 IEEE OUI Identifier: 8d 6b 50 00:48:56.017 Multi-path I/O 00:48:56.017 May have multiple subsystem ports: Yes 00:48:56.017 May have multiple controllers: Yes 00:48:56.017 Associated with SR-IOV VF: No 00:48:56.017 Max Data Transfer Size: 131072 00:48:56.017 Max Number of Namespaces: 32 00:48:56.017 Max Number of I/O Queues: 127 00:48:56.017 NVMe Specification Version (VS): 1.3 00:48:56.017 NVMe Specification Version (Identify): 1.3 00:48:56.017 Maximum Queue Entries: 256 00:48:56.017 Contiguous Queues Required: Yes 00:48:56.017 Arbitration Mechanisms Supported 00:48:56.017 Weighted Round Robin: Not Supported 00:48:56.017 Vendor Specific: Not Supported 00:48:56.017 Reset Timeout: 15000 ms 00:48:56.017 Doorbell Stride: 4 bytes 00:48:56.017 NVM Subsystem Reset: Not Supported 00:48:56.017 Command Sets Supported 00:48:56.017 NVM Command Set: Supported 00:48:56.017 Boot Partition: Not Supported 00:48:56.017 Memory Page Size Minimum: 4096 bytes 00:48:56.017 Memory Page Size Maximum: 4096 bytes 00:48:56.017 Persistent Memory Region: Not Supported 00:48:56.017 Optional Asynchronous Events Supported 00:48:56.017 Namespace Attribute Notices: Supported 00:48:56.017 Firmware Activation Notices: Not Supported 00:48:56.017 ANA Change Notices: Not Supported 00:48:56.017 PLE Aggregate Log Change Notices: Not Supported 00:48:56.017 LBA Status Info Alert Notices: Not Supported 00:48:56.017 EGE Aggregate Log Change Notices: Not Supported 00:48:56.017 Normal NVM Subsystem Shutdown event: Not Supported 00:48:56.017 Zone Descriptor Change Notices: Not Supported 00:48:56.017 Discovery Log Change Notices: Not Supported 00:48:56.017 Controller Attributes 00:48:56.017 128-bit Host Identifier: Supported 00:48:56.017 
Non-Operational Permissive Mode: Not Supported 00:48:56.017 NVM Sets: Not Supported 00:48:56.017 Read Recovery Levels: Not Supported 00:48:56.017 Endurance Groups: Not Supported 00:48:56.017 Predictable Latency Mode: Not Supported 00:48:56.017 Traffic Based Keep ALive: Not Supported 00:48:56.017 Namespace Granularity: Not Supported 00:48:56.017 SQ Associations: Not Supported 00:48:56.017 UUID List: Not Supported 00:48:56.017 Multi-Domain Subsystem: Not Supported 00:48:56.017 Fixed Capacity Management: Not Supported 00:48:56.017 Variable Capacity Management: Not Supported 00:48:56.017 Delete Endurance Group: Not Supported 00:48:56.017 Delete NVM Set: Not Supported 00:48:56.017 Extended LBA Formats Supported: Not Supported 00:48:56.017 Flexible Data Placement Supported: Not Supported 00:48:56.017 00:48:56.017 Controller Memory Buffer Support 00:48:56.017 ================================ 00:48:56.017 Supported: No 00:48:56.017 00:48:56.017 Persistent Memory Region Support 00:48:56.017 ================================ 00:48:56.017 Supported: No 00:48:56.017 00:48:56.017 Admin Command Set Attributes 00:48:56.017 ============================ 00:48:56.017 Security Send/Receive: Not Supported 00:48:56.017 Format NVM: Not Supported 00:48:56.017 Firmware Activate/Download: Not Supported 00:48:56.017 Namespace Management: Not Supported 00:48:56.017 Device Self-Test: Not Supported 00:48:56.017 Directives: Not Supported 00:48:56.017 NVMe-MI: Not Supported 00:48:56.017 Virtualization Management: Not Supported 00:48:56.017 Doorbell Buffer Config: Not Supported 00:48:56.017 Get LBA Status Capability: Not Supported 00:48:56.017 Command & Feature Lockdown Capability: Not Supported 00:48:56.017 Abort Command Limit: 4 00:48:56.017 Async Event Request Limit: 4 00:48:56.017 Number of Firmware Slots: N/A 00:48:56.017 Firmware Slot 1 Read-Only: N/A 00:48:56.017 Firmware Activation Without Reset: N/A 00:48:56.017 Multiple Update Detection Support: N/A 00:48:56.017 Firmware Update 
Granularity: No Information Provided 00:48:56.017 Per-Namespace SMART Log: No 00:48:56.017 Asymmetric Namespace Access Log Page: Not Supported 00:48:56.018 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:48:56.018 Command Effects Log Page: Supported 00:48:56.018 Get Log Page Extended Data: Supported 00:48:56.018 Telemetry Log Pages: Not Supported 00:48:56.018 Persistent Event Log Pages: Not Supported 00:48:56.018 Supported Log Pages Log Page: May Support 00:48:56.018 Commands Supported & Effects Log Page: Not Supported 00:48:56.018 Feature Identifiers & Effects Log Page:May Support 00:48:56.018 NVMe-MI Commands & Effects Log Page: May Support 00:48:56.018 Data Area 4 for Telemetry Log: Not Supported 00:48:56.018 Error Log Page Entries Supported: 128 00:48:56.018 Keep Alive: Supported 00:48:56.018 Keep Alive Granularity: 10000 ms 00:48:56.018 00:48:56.018 NVM Command Set Attributes 00:48:56.018 ========================== 00:48:56.018 Submission Queue Entry Size 00:48:56.018 Max: 64 00:48:56.018 Min: 64 00:48:56.018 Completion Queue Entry Size 00:48:56.018 Max: 16 00:48:56.018 Min: 16 00:48:56.018 Number of Namespaces: 32 00:48:56.018 Compare Command: Supported 00:48:56.018 Write Uncorrectable Command: Not Supported 00:48:56.018 Dataset Management Command: Supported 00:48:56.018 Write Zeroes Command: Supported 00:48:56.018 Set Features Save Field: Not Supported 00:48:56.018 Reservations: Not Supported 00:48:56.018 Timestamp: Not Supported 00:48:56.018 Copy: Supported 00:48:56.018 Volatile Write Cache: Present 00:48:56.018 Atomic Write Unit (Normal): 1 00:48:56.018 Atomic Write Unit (PFail): 1 00:48:56.018 Atomic Compare & Write Unit: 1 00:48:56.018 Fused Compare & Write: Supported 00:48:56.018 Scatter-Gather List 00:48:56.018 SGL Command Set: Supported (Dword aligned) 00:48:56.018 SGL Keyed: Not Supported 00:48:56.018 SGL Bit Bucket Descriptor: Not Supported 00:48:56.018 SGL Metadata Pointer: Not Supported 00:48:56.018 Oversized SGL: Not Supported 00:48:56.018 SGL 
Metadata Address: Not Supported 00:48:56.018 SGL Offset: Not Supported 00:48:56.018 Transport SGL Data Block: Not Supported 00:48:56.018 Replay Protected Memory Block: Not Supported 00:48:56.018 00:48:56.018 Firmware Slot Information 00:48:56.018 ========================= 00:48:56.018 Active slot: 1 00:48:56.018 Slot 1 Firmware Revision: 25.01 00:48:56.018 00:48:56.018 00:48:56.018 Commands Supported and Effects 00:48:56.018 ============================== 00:48:56.018 Admin Commands 00:48:56.018 -------------- 00:48:56.018 Get Log Page (02h): Supported 00:48:56.018 Identify (06h): Supported 00:48:56.018 Abort (08h): Supported 00:48:56.018 Set Features (09h): Supported 00:48:56.018 Get Features (0Ah): Supported 00:48:56.018 Asynchronous Event Request (0Ch): Supported 00:48:56.018 Keep Alive (18h): Supported 00:48:56.018 I/O Commands 00:48:56.018 ------------ 00:48:56.018 Flush (00h): Supported LBA-Change 00:48:56.018 Write (01h): Supported LBA-Change 00:48:56.018 Read (02h): Supported 00:48:56.018 Compare (05h): Supported 00:48:56.018 Write Zeroes (08h): Supported LBA-Change 00:48:56.018 Dataset Management (09h): Supported LBA-Change 00:48:56.018 Copy (19h): Supported LBA-Change 00:48:56.018 00:48:56.018 Error Log 00:48:56.018 ========= 00:48:56.018 00:48:56.018 Arbitration 00:48:56.018 =========== 00:48:56.018 Arbitration Burst: 1 00:48:56.018 00:48:56.018 Power Management 00:48:56.018 ================ 00:48:56.018 Number of Power States: 1 00:48:56.018 Current Power State: Power State #0 00:48:56.018 Power State #0: 00:48:56.018 Max Power: 0.00 W 00:48:56.018 Non-Operational State: Operational 00:48:56.018 Entry Latency: Not Reported 00:48:56.018 Exit Latency: Not Reported 00:48:56.018 Relative Read Throughput: 0 00:48:56.018 Relative Read Latency: 0 00:48:56.018 Relative Write Throughput: 0 00:48:56.018 Relative Write Latency: 0 00:48:56.018 Idle Power: Not Reported 00:48:56.018 Active Power: Not Reported 00:48:56.018 Non-Operational Permissive Mode: Not 
Supported 00:48:56.018 00:48:56.018 Health Information 00:48:56.018 ================== 00:48:56.018 Critical Warnings: 00:48:56.018 Available Spare Space: OK 00:48:56.018 Temperature: OK 00:48:56.018 Device Reliability: OK 00:48:56.018 Read Only: No 00:48:56.018 Volatile Memory Backup: OK 00:48:56.018 Current Temperature: 0 Kelvin (-273 Celsius) 00:48:56.018 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:48:56.018 Available Spare: 0% 00:48:56.018 Available Sp[2024-12-09 10:59:57.018849] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:48:56.018 [2024-12-09 10:59:57.026658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:48:56.018 [2024-12-09 10:59:57.026712] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:48:56.018 [2024-12-09 10:59:57.026730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.018 [2024-12-09 10:59:57.026742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.018 [2024-12-09 10:59:57.026755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.018 [2024-12-09 10:59:57.026767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.018 [2024-12-09 10:59:57.026854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:48:56.018 [2024-12-09 10:59:57.026873] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:48:56.018 
[2024-12-09 10:59:57.027856] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:48:56.018 [2024-12-09 10:59:57.027924] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:48:56.018 [2024-12-09 10:59:57.027937] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:48:56.018 [2024-12-09 10:59:57.028864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:48:56.018 [2024-12-09 10:59:57.028886] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:48:56.018 [2024-12-09 10:59:57.028948] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:48:56.018 [2024-12-09 10:59:57.030609] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:48:56.280 are Threshold: 0% 00:48:56.280 Life Percentage Used: 0% 00:48:56.280 Data Units Read: 0 00:48:56.280 Data Units Written: 0 00:48:56.280 Host Read Commands: 0 00:48:56.280 Host Write Commands: 0 00:48:56.280 Controller Busy Time: 0 minutes 00:48:56.280 Power Cycles: 0 00:48:56.280 Power On Hours: 0 hours 00:48:56.280 Unsafe Shutdowns: 0 00:48:56.280 Unrecoverable Media Errors: 0 00:48:56.280 Lifetime Error Log Entries: 0 00:48:56.280 Warning Temperature Time: 0 minutes 00:48:56.280 Critical Temperature Time: 0 minutes 00:48:56.280 00:48:56.280 Number of Queues 00:48:56.280 ================ 00:48:56.280 Number of I/O Submission Queues: 127 00:48:56.280 Number of I/O Completion Queues: 127 00:48:56.280 00:48:56.280 Active Namespaces 00:48:56.280 ================= 00:48:56.280 Namespace ID:1 00:48:56.280 Error Recovery Timeout: Unlimited 
00:48:56.280 Command Set Identifier: NVM (00h) 00:48:56.280 Deallocate: Supported 00:48:56.280 Deallocated/Unwritten Error: Not Supported 00:48:56.280 Deallocated Read Value: Unknown 00:48:56.280 Deallocate in Write Zeroes: Not Supported 00:48:56.280 Deallocated Guard Field: 0xFFFF 00:48:56.280 Flush: Supported 00:48:56.280 Reservation: Supported 00:48:56.280 Namespace Sharing Capabilities: Multiple Controllers 00:48:56.280 Size (in LBAs): 131072 (0GiB) 00:48:56.280 Capacity (in LBAs): 131072 (0GiB) 00:48:56.280 Utilization (in LBAs): 131072 (0GiB) 00:48:56.280 NGUID: BEDB6F0E16E845E99EA2BC487138BC6D 00:48:56.280 UUID: bedb6f0e-16e8-45e9-9ea2-bc487138bc6d 00:48:56.280 Thin Provisioning: Not Supported 00:48:56.280 Per-NS Atomic Units: Yes 00:48:56.280 Atomic Boundary Size (Normal): 0 00:48:56.280 Atomic Boundary Size (PFail): 0 00:48:56.280 Atomic Boundary Offset: 0 00:48:56.280 Maximum Single Source Range Length: 65535 00:48:56.280 Maximum Copy Length: 65535 00:48:56.280 Maximum Source Range Count: 1 00:48:56.280 NGUID/EUI64 Never Reused: No 00:48:56.280 Namespace Write Protected: No 00:48:56.280 Number of LBA Formats: 1 00:48:56.280 Current LBA Format: LBA Format #00 00:48:56.280 LBA Format #00: Data Size: 512 Metadata Size: 0 00:48:56.280 00:48:56.280 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:48:56.280 [2024-12-09 10:59:57.428179] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:01.569 Initializing NVMe Controllers 00:49:01.569 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:49:01.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:49:01.569 Initialization complete. Launching workers. 00:49:01.569 ======================================================== 00:49:01.569 Latency(us) 00:49:01.569 Device Information : IOPS MiB/s Average min max 00:49:01.569 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.14 156.07 3203.56 967.52 7579.83 00:49:01.569 ======================================================== 00:49:01.569 Total : 39953.14 156.07 3203.56 967.52 7579.83 00:49:01.569 00:49:01.569 [2024-12-09 11:00:02.530950] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:01.569 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:49:01.830 [2024-12-09 11:00:02.863910] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:07.117 Initializing NVMe Controllers 00:49:07.117 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:49:07.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:49:07.117 Initialization complete. Launching workers. 
00:49:07.117 ======================================================== 00:49:07.117 Latency(us) 00:49:07.117 Device Information : IOPS MiB/s Average min max 00:49:07.117 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25447.88 99.41 5029.32 1297.68 10625.27 00:49:07.117 ======================================================== 00:49:07.117 Total : 25447.88 99.41 5029.32 1297.68 10625.27 00:49:07.117 00:49:07.117 [2024-12-09 11:00:07.885036] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:07.117 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:49:07.117 [2024-12-09 11:00:08.236198] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:12.407 [2024-12-09 11:00:13.379745] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:12.407 Initializing NVMe Controllers 00:49:12.407 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:49:12.407 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:49:12.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:49:12.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:49:12.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:49:12.407 Initialization complete. Launching workers. 
00:49:12.407 Starting thread on core 2 00:49:12.407 Starting thread on core 3 00:49:12.407 Starting thread on core 1 00:49:12.407 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:49:12.668 [2024-12-09 11:00:13.843173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:15.971 [2024-12-09 11:00:16.913854] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:15.971 Initializing NVMe Controllers 00:49:15.971 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:49:15.971 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:49:15.971 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:49:15.971 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:49:15.971 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:49:15.971 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:49:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:49:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:49:15.971 Initialization complete. Launching workers. 
00:49:15.971 Starting thread on core 1 with urgent priority queue 00:49:15.971 Starting thread on core 2 with urgent priority queue 00:49:15.971 Starting thread on core 3 with urgent priority queue 00:49:15.971 Starting thread on core 0 with urgent priority queue 00:49:15.971 SPDK bdev Controller (SPDK2 ) core 0: 6723.00 IO/s 14.87 secs/100000 ios 00:49:15.971 SPDK bdev Controller (SPDK2 ) core 1: 7112.67 IO/s 14.06 secs/100000 ios 00:49:15.971 SPDK bdev Controller (SPDK2 ) core 2: 6531.33 IO/s 15.31 secs/100000 ios 00:49:15.971 SPDK bdev Controller (SPDK2 ) core 3: 5053.67 IO/s 19.79 secs/100000 ios 00:49:15.971 ======================================================== 00:49:15.971 00:49:15.971 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:49:16.543 [2024-12-09 11:00:17.419514] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:16.543 Initializing NVMe Controllers 00:49:16.543 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:49:16.543 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:49:16.543 Namespace ID: 1 size: 0GB 00:49:16.543 Initialization complete. 00:49:16.543 INFO: using host memory buffer for IO 00:49:16.543 Hello world! 
00:49:16.543 [2024-12-09 11:00:17.430586] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:16.543 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:49:16.804 [2024-12-09 11:00:17.947791] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:18.189 Initializing NVMe Controllers 00:49:18.189 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:49:18.189 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:49:18.189 Initialization complete. Launching workers. 00:49:18.189 submit (in ns) avg, min, max = 8717.4, 4391.3, 4004180.9 00:49:18.189 complete (in ns) avg, min, max = 21036.7, 2593.9, 4998190.4 00:49:18.189 00:49:18.189 Submit histogram 00:49:18.189 ================ 00:49:18.189 Range in us Cumulative Count 00:49:18.189 4.369 - 4.397: 0.0183% ( 3) 00:49:18.189 4.397 - 4.424: 1.4042% ( 227) 00:49:18.189 4.424 - 4.452: 5.9588% ( 746) 00:49:18.189 4.452 - 4.480: 14.6407% ( 1422) 00:49:18.189 4.480 - 4.508: 25.2030% ( 1730) 00:49:18.189 4.508 - 4.536: 36.3942% ( 1833) 00:49:18.189 4.536 - 4.563: 48.1348% ( 1923) 00:49:18.189 4.563 - 4.591: 59.4603% ( 1855) 00:49:18.189 4.591 - 4.619: 67.7392% ( 1356) 00:49:18.189 4.619 - 4.647: 74.7115% ( 1142) 00:49:18.189 4.647 - 4.675: 79.0280% ( 707) 00:49:18.189 4.675 - 4.703: 81.8914% ( 469) 00:49:18.189 4.703 - 4.730: 84.3885% ( 409) 00:49:18.189 4.730 - 4.758: 86.3056% ( 314) 00:49:18.189 4.758 - 4.786: 88.3326% ( 332) 00:49:18.189 4.786 - 4.814: 90.2497% ( 314) 00:49:18.189 4.814 - 4.842: 92.0996% ( 303) 00:49:18.189 4.842 - 4.870: 93.8824% ( 292) 00:49:18.189 4.870 - 4.897: 95.6713% ( 293) 00:49:18.189 4.897 - 4.925: 96.9107% ( 203) 00:49:18.189 4.925 - 4.953: 97.8692% 
( 157) 00:49:18.189 4.953 - 4.981: 98.5591% ( 113) 00:49:18.189 4.981 - 5.009: 99.0476% ( 80) 00:49:18.189 5.009 - 5.037: 99.2429% ( 32) 00:49:18.189 5.037 - 5.064: 99.3284% ( 14) 00:49:18.189 5.064 - 5.092: 99.4017% ( 12) 00:49:18.189 5.092 - 5.120: 99.4566% ( 9) 00:49:18.190 5.120 - 5.148: 99.4871% ( 5) 00:49:18.190 5.176 - 5.203: 99.4933% ( 1) 00:49:18.190 5.203 - 5.231: 99.4994% ( 1) 00:49:18.190 7.958 - 8.014: 99.5055% ( 1) 00:49:18.190 8.292 - 8.348: 99.5116% ( 1) 00:49:18.190 8.348 - 8.403: 99.5177% ( 1) 00:49:18.190 8.459 - 8.515: 99.5238% ( 1) 00:49:18.190 8.570 - 8.626: 99.5299% ( 1) 00:49:18.190 8.737 - 8.793: 99.5421% ( 2) 00:49:18.190 8.849 - 8.904: 99.5482% ( 1) 00:49:18.190 8.904 - 8.960: 99.5543% ( 1) 00:49:18.190 8.960 - 9.016: 99.5604% ( 1) 00:49:18.190 9.016 - 9.071: 99.5665% ( 1) 00:49:18.190 9.071 - 9.127: 99.5726% ( 1) 00:49:18.190 9.238 - 9.294: 99.5787% ( 1) 00:49:18.190 9.350 - 9.405: 99.5848% ( 1) 00:49:18.190 9.461 - 9.517: 99.6093% ( 4) 00:49:18.190 9.517 - 9.572: 99.6215% ( 2) 00:49:18.190 9.572 - 9.628: 99.6276% ( 1) 00:49:18.190 9.628 - 9.683: 99.6337% ( 1) 00:49:18.190 9.683 - 9.739: 99.6459% ( 2) 00:49:18.190 9.739 - 9.795: 99.6520% ( 1) 00:49:18.190 9.962 - 10.017: 99.6703% ( 3) 00:49:18.190 10.129 - 10.184: 99.6764% ( 1) 00:49:18.190 10.240 - 10.296: 99.6886% ( 2) 00:49:18.190 10.296 - 10.351: 99.7008% ( 2) 00:49:18.190 10.407 - 10.463: 99.7192% ( 3) 00:49:18.190 10.463 - 10.518: 99.7253% ( 1) 00:49:18.190 10.574 - 10.630: 99.7375% ( 2) 00:49:18.190 10.630 - 10.685: 99.7497% ( 2) 00:49:18.190 10.685 - 10.741: 99.7558% ( 1) 00:49:18.190 10.741 - 10.797: 99.7680% ( 2) 00:49:18.190 10.797 - 10.852: 99.7741% ( 1) 00:49:18.190 10.852 - 10.908: 99.7863% ( 2) 00:49:18.190 11.242 - 11.297: 99.7924% ( 1) 00:49:18.190 11.297 - 11.353: 99.7985% ( 1) 00:49:18.190 11.353 - 11.409: 99.8107% ( 2) 00:49:18.190 11.631 - 11.687: 99.8168% ( 1) 00:49:18.190 11.743 - 11.798: 99.8290% ( 2) 00:49:18.190 11.854 - 11.910: 99.8352% ( 1) 00:49:18.190 11.910 
- 11.965: 99.8413% ( 1) 00:49:18.190 11.965 - 12.021: 99.8474% ( 1) 00:49:18.190 12.021 - 12.077: 99.8535% ( 1) 00:49:18.190 12.077 - 12.132: 99.8596% ( 1) 00:49:18.190 12.299 - 12.355: 99.8657% ( 1) 00:49:18.190 12.410 - 12.466: 99.8718% ( 1) 00:49:18.190 12.689 - 12.744: 99.8779% ( 1) 00:49:18.190 13.301 - 13.357: 99.8840% ( 1) 00:49:18.190 15.249 - 15.360: 99.8901% ( 1) 00:49:18.190 17.141 - 17.252: 99.8962% ( 1) 00:49:18.190 3120.083 - 3134.330: 99.9023% ( 1) 00:49:18.190 3989.148 - 4017.642: 100.0000% ( 16) 00:49:18.190 00:49:18.190 [2024-12-09 11:00:19.053017] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:18.190 Complete histogram 00:49:18.190 ================== 00:49:18.190 Range in us Cumulative Count 00:49:18.190 2.588 - 2.602: 0.2686% ( 44) 00:49:18.190 2.602 - 2.616: 10.3608% ( 1653) 00:49:18.190 2.616 - 2.630: 47.4022% ( 6067) 00:49:18.190 2.630 - 2.643: 65.4619% ( 2958) 00:49:18.190 2.643 - 2.657: 70.9384% ( 897) 00:49:18.190 2.657 - 2.671: 80.1270% ( 1505) 00:49:18.190 2.671 - 2.685: 85.8599% ( 939) 00:49:18.190 2.685 - 2.699: 89.4377% ( 586) 00:49:18.190 2.699 - 2.713: 94.0961% ( 763) 00:49:18.190 2.713 - 2.727: 97.0328% ( 481) 00:49:18.190 2.727 - 2.741: 98.0341% ( 164) 00:49:18.190 2.741 - 2.755: 98.5652% ( 87) 00:49:18.190 2.755 - 2.769: 98.9499% ( 63) 00:49:18.190 2.769 - 2.783: 99.0659% ( 19) 00:49:18.190 2.783 - 2.797: 99.1025% ( 6) 00:49:18.190 2.797 - 2.810: 99.1269% ( 4) 00:49:18.190 2.810 - 2.824: 99.1391% ( 2) 00:49:18.190 2.824 - 2.838: 99.1636% ( 4) 00:49:18.190 2.838 - 2.852: 99.1697% ( 1) 00:49:18.190 2.866 - 2.880: 99.1758% ( 1) 00:49:18.190 2.894 - 2.908: 99.1819% ( 1) 00:49:18.190 2.908 - 2.922: 99.1941% ( 2) 00:49:18.190 3.103 - 3.117: 99.2002% ( 1) 00:49:18.190 5.983 - 6.010: 99.2063% ( 1) 00:49:18.190 6.344 - 6.372: 99.2124% ( 1) 00:49:18.190 6.817 - 6.845: 99.2185% ( 1) 00:49:18.190 7.123 - 7.179: 99.2246% ( 1) 00:49:18.190 7.346 - 7.402: 99.2307% ( 1) 00:49:18.190 
7.457 - 7.513: 99.2429% ( 2) 00:49:18.190 7.569 - 7.624: 99.2551% ( 2) 00:49:18.190 7.736 - 7.791: 99.2612% ( 1) 00:49:18.190 7.791 - 7.847: 99.2796% ( 3) 00:49:18.190 7.903 - 7.958: 99.2857% ( 1) 00:49:18.190 7.958 - 8.014: 99.2918% ( 1) 00:49:18.190 8.014 - 8.070: 99.2979% ( 1) 00:49:18.190 8.070 - 8.125: 99.3101% ( 2) 00:49:18.190 8.181 - 8.237: 99.3223% ( 2) 00:49:18.190 8.292 - 8.348: 99.3345% ( 2) 00:49:18.190 8.348 - 8.403: 99.3467% ( 2) 00:49:18.190 8.403 - 8.459: 99.3528% ( 1) 00:49:18.190 8.682 - 8.737: 99.3589% ( 1) 00:49:18.190 8.737 - 8.793: 99.3773% ( 3) 00:49:18.190 8.793 - 8.849: 99.3895% ( 2) 00:49:18.190 8.849 - 8.904: 99.4017% ( 2) 00:49:18.190 8.904 - 8.960: 99.4200% ( 3) 00:49:18.190 9.016 - 9.071: 99.4322% ( 2) 00:49:18.190 9.127 - 9.183: 99.4444% ( 2) 00:49:18.190 9.350 - 9.405: 99.4505% ( 1) 00:49:18.190 9.405 - 9.461: 99.4566% ( 1) 00:49:18.190 9.461 - 9.517: 99.4627% ( 1) 00:49:18.190 9.517 - 9.572: 99.4688% ( 1) 00:49:18.190 9.572 - 9.628: 99.4749% ( 1) 00:49:18.190 9.683 - 9.739: 99.4810% ( 1) 00:49:18.190 9.739 - 9.795: 99.4871% ( 1) 00:49:18.190 9.850 - 9.906: 99.4933% ( 1) 00:49:18.190 10.073 - 10.129: 99.4994% ( 1) 00:49:18.190 10.240 - 10.296: 99.5055% ( 1) 00:49:18.190 10.852 - 10.908: 99.5116% ( 1) 00:49:18.190 12.243 - 12.299: 99.5177% ( 1) 00:49:18.190 13.134 - 13.190: 99.5238% ( 1) 00:49:18.190 15.137 - 15.249: 99.5299% ( 1) 00:49:18.190 15.583 - 15.694: 99.5360% ( 1) 00:49:18.190 67.228 - 67.673: 99.5421% ( 1) 00:49:18.190 3989.148 - 4017.642: 99.9939% ( 74) 00:49:18.190 4986.435 - 5014.929: 100.0000% ( 1) 00:49:18.190 00:49:18.190 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:49:18.190 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:49:18.190 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:49:18.190 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:49:18.190 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:49:18.190 [ 00:49:18.190 { 00:49:18.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:18.190 "subtype": "Discovery", 00:49:18.190 "listen_addresses": [], 00:49:18.190 "allow_any_host": true, 00:49:18.190 "hosts": [] 00:49:18.190 }, 00:49:18.190 { 00:49:18.190 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:49:18.190 "subtype": "NVMe", 00:49:18.190 "listen_addresses": [ 00:49:18.190 { 00:49:18.190 "trtype": "VFIOUSER", 00:49:18.190 "adrfam": "IPv4", 00:49:18.190 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:49:18.190 "trsvcid": "0" 00:49:18.190 } 00:49:18.190 ], 00:49:18.190 "allow_any_host": true, 00:49:18.190 "hosts": [], 00:49:18.190 "serial_number": "SPDK1", 00:49:18.190 "model_number": "SPDK bdev Controller", 00:49:18.190 "max_namespaces": 32, 00:49:18.190 "min_cntlid": 1, 00:49:18.190 "max_cntlid": 65519, 00:49:18.190 "namespaces": [ 00:49:18.190 { 00:49:18.190 "nsid": 1, 00:49:18.190 "bdev_name": "Malloc1", 00:49:18.190 "name": "Malloc1", 00:49:18.190 "nguid": "8EF4FA9BAE724BA486436C328B9C574A", 00:49:18.190 "uuid": "8ef4fa9b-ae72-4ba4-8643-6c328b9c574a" 00:49:18.190 }, 00:49:18.190 { 00:49:18.190 "nsid": 2, 00:49:18.190 "bdev_name": "Malloc3", 00:49:18.190 "name": "Malloc3", 00:49:18.190 "nguid": "560C48EC763B4D7A8667BEA0F539DB8B", 00:49:18.190 "uuid": "560c48ec-763b-4d7a-8667-bea0f539db8b" 00:49:18.190 } 00:49:18.190 ] 00:49:18.190 }, 00:49:18.190 { 00:49:18.190 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:49:18.190 "subtype": "NVMe", 00:49:18.190 "listen_addresses": [ 00:49:18.190 { 00:49:18.190 "trtype": "VFIOUSER", 00:49:18.190 "adrfam": "IPv4", 00:49:18.190 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:49:18.190 "trsvcid": "0" 00:49:18.190 } 00:49:18.190 ], 00:49:18.190 "allow_any_host": true, 00:49:18.190 "hosts": [], 00:49:18.190 "serial_number": "SPDK2", 00:49:18.190 "model_number": "SPDK bdev Controller", 00:49:18.190 "max_namespaces": 32, 00:49:18.191 "min_cntlid": 1, 00:49:18.191 "max_cntlid": 65519, 00:49:18.191 "namespaces": [ 00:49:18.191 { 00:49:18.191 "nsid": 1, 00:49:18.191 "bdev_name": "Malloc2", 00:49:18.191 "name": "Malloc2", 00:49:18.191 "nguid": "BEDB6F0E16E845E99EA2BC487138BC6D", 00:49:18.191 "uuid": "bedb6f0e-16e8-45e9-9ea2-bc487138bc6d" 00:49:18.191 } 00:49:18.191 ] 00:49:18.191 } 00:49:18.191 ] 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2382531 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:49:18.191 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3 00:49:18.450 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:49:18.450 [2024-12-09 11:00:19.565141] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:49:18.711 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:18.711 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:49:18.711 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:49:18.711 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:49:18.711 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:49:18.971 Malloc4 00:49:18.971 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:49:19.231 [2024-12-09 11:00:20.158416] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:49:19.231 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:49:19.231 Asynchronous Event Request test 00:49:19.231 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:49:19.231 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:49:19.231 Registering asynchronous event callbacks... 00:49:19.231 Starting namespace attribute notice tests for all controllers... 00:49:19.231 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:49:19.231 aer_cb - Changed Namespace 00:49:19.231 Cleaning up... 
00:49:19.492 [ 00:49:19.492 { 00:49:19.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:19.492 "subtype": "Discovery", 00:49:19.492 "listen_addresses": [], 00:49:19.492 "allow_any_host": true, 00:49:19.492 "hosts": [] 00:49:19.492 }, 00:49:19.492 { 00:49:19.492 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:49:19.492 "subtype": "NVMe", 00:49:19.492 "listen_addresses": [ 00:49:19.492 { 00:49:19.492 "trtype": "VFIOUSER", 00:49:19.492 "adrfam": "IPv4", 00:49:19.492 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:49:19.492 "trsvcid": "0" 00:49:19.492 } 00:49:19.492 ], 00:49:19.492 "allow_any_host": true, 00:49:19.492 "hosts": [], 00:49:19.492 "serial_number": "SPDK1", 00:49:19.492 "model_number": "SPDK bdev Controller", 00:49:19.492 "max_namespaces": 32, 00:49:19.492 "min_cntlid": 1, 00:49:19.492 "max_cntlid": 65519, 00:49:19.492 "namespaces": [ 00:49:19.492 { 00:49:19.492 "nsid": 1, 00:49:19.492 "bdev_name": "Malloc1", 00:49:19.492 "name": "Malloc1", 00:49:19.492 "nguid": "8EF4FA9BAE724BA486436C328B9C574A", 00:49:19.492 "uuid": "8ef4fa9b-ae72-4ba4-8643-6c328b9c574a" 00:49:19.492 }, 00:49:19.492 { 00:49:19.492 "nsid": 2, 00:49:19.492 "bdev_name": "Malloc3", 00:49:19.492 "name": "Malloc3", 00:49:19.492 "nguid": "560C48EC763B4D7A8667BEA0F539DB8B", 00:49:19.492 "uuid": "560c48ec-763b-4d7a-8667-bea0f539db8b" 00:49:19.492 } 00:49:19.492 ] 00:49:19.492 }, 00:49:19.492 { 00:49:19.492 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:49:19.492 "subtype": "NVMe", 00:49:19.492 "listen_addresses": [ 00:49:19.492 { 00:49:19.492 "trtype": "VFIOUSER", 00:49:19.492 "adrfam": "IPv4", 00:49:19.492 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:49:19.492 "trsvcid": "0" 00:49:19.492 } 00:49:19.492 ], 00:49:19.492 "allow_any_host": true, 00:49:19.492 "hosts": [], 00:49:19.492 "serial_number": "SPDK2", 00:49:19.492 "model_number": "SPDK bdev Controller", 00:49:19.492 "max_namespaces": 32, 00:49:19.492 "min_cntlid": 1, 00:49:19.492 "max_cntlid": 65519, 00:49:19.492 "namespaces": [ 
00:49:19.492 { 00:49:19.492 "nsid": 1, 00:49:19.492 "bdev_name": "Malloc2", 00:49:19.492 "name": "Malloc2", 00:49:19.492 "nguid": "BEDB6F0E16E845E99EA2BC487138BC6D", 00:49:19.492 "uuid": "bedb6f0e-16e8-45e9-9ea2-bc487138bc6d" 00:49:19.492 }, 00:49:19.492 { 00:49:19.492 "nsid": 2, 00:49:19.492 "bdev_name": "Malloc4", 00:49:19.492 "name": "Malloc4", 00:49:19.492 "nguid": "583ECCE87BB944008AA6CF69ACA9B7CB", 00:49:19.492 "uuid": "583ecce8-7bb9-4400-8aa6-cf69aca9b7cb" 00:49:19.492 } 00:49:19.492 ] 00:49:19.492 } 00:49:19.492 ] 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2382531 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2375559 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2375559 ']' 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2375559 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2375559 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2375559' 00:49:19.492 killing process with pid 2375559 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2375559 00:49:19.492 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2375559 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2382730 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2382730' 00:49:19.753 Process pid: 2382730 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2382730 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2382730 ']' 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:49:19.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:49:19.753 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:49:19.753 [2024-12-09 11:00:20.898175] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:49:19.753 [2024-12-09 11:00:20.899583] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:49:19.753 [2024-12-09 11:00:20.899643] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:20.014 [2024-12-09 11:00:21.028330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:20.014 [2024-12-09 11:00:21.079581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:20.014 [2024-12-09 11:00:21.079631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:20.014 [2024-12-09 11:00:21.079653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:20.014 [2024-12-09 11:00:21.079667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:20.014 [2024-12-09 11:00:21.079679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:49:20.014 [2024-12-09 11:00:21.081543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:20.014 [2024-12-09 11:00:21.081633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:20.014 [2024-12-09 11:00:21.081736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:20.014 [2024-12-09 11:00:21.081741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:20.014 [2024-12-09 11:00:21.159935] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:49:20.015 [2024-12-09 11:00:21.160141] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:49:20.015 [2024-12-09 11:00:21.160291] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:49:20.015 [2024-12-09 11:00:21.160745] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:49:20.015 [2024-12-09 11:00:21.161000] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:49:20.015 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:20.015 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:49:20.015 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:49:21.401 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:49:21.662 Malloc1 00:49:21.662 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:49:21.923 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:49:22.184 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:49:22.445 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:49:22.445 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:49:22.445 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:49:22.708 Malloc2 00:49:22.708 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:49:22.969 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:49:22.969 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2382730 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2382730 ']' 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2382730 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:23.230 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382730 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382730' 00:49:23.230 killing process with pid 2382730 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2382730 00:49:23.230 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2382730 00:49:23.490 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:49:23.490 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:49:23.490 00:49:23.490 real 0m55.521s 00:49:23.490 user 3m33.587s 00:49:23.490 sys 0m4.283s 00:49:23.490 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:23.490 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:49:23.490 ************************************ 00:49:23.490 END TEST nvmf_vfio_user 00:49:23.490 ************************************ 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:49:23.751 ************************************ 00:49:23.751 START TEST nvmf_vfio_user_nvme_compliance 00:49:23.751 ************************************ 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:49:23.751 * Looking for test storage... 00:49:23.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:49:23.751 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:49:23.751 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:24.013 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:24.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:24.013 --rc genhtml_branch_coverage=1 00:49:24.013 --rc genhtml_function_coverage=1 00:49:24.013 --rc genhtml_legend=1 00:49:24.013 --rc geninfo_all_blocks=1 00:49:24.013 --rc geninfo_unexecuted_blocks=1 00:49:24.013 00:49:24.013 ' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:24.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:24.013 --rc genhtml_branch_coverage=1 00:49:24.013 --rc genhtml_function_coverage=1 00:49:24.013 --rc genhtml_legend=1 00:49:24.013 --rc geninfo_all_blocks=1 00:49:24.013 --rc geninfo_unexecuted_blocks=1 00:49:24.013 00:49:24.013 ' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:24.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:24.013 --rc genhtml_branch_coverage=1 00:49:24.013 --rc genhtml_function_coverage=1 00:49:24.013 --rc 
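The `lt 1.15 2` / `cmp_versions` trace above is deciding whether the installed `lcov` is new enough to need extra branch-coverage flags: it splits both version strings on `.`, pads the shorter one with zeros, and compares component by component. A simplified, self-contained sketch of that check (an independent illustration, not SPDK's actual `scripts/common.sh` implementation):

```shell
# Returns 0 (true) iff dotted version $1 is strictly less than $2,
# comparing numeric components left to right, missing components as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)            # word-split on "." into component arrays
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0       # first differing component decides
        (( x > y )) && return 1
    done
    return 1                          # equal versions => not less-than
}
```

Note the comparison must be numeric, not lexical: `1.9 < 1.10` holds for versions even though `"9" > "10"` as strings, which is exactly why the trace goes through `decimal`/`read -ra` rather than a plain string compare.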
genhtml_legend=1 00:49:24.013 --rc geninfo_all_blocks=1 00:49:24.013 --rc geninfo_unexecuted_blocks=1 00:49:24.013 00:49:24.013 ' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:24.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:24.013 --rc genhtml_branch_coverage=1 00:49:24.013 --rc genhtml_function_coverage=1 00:49:24.013 --rc genhtml_legend=1 00:49:24.013 --rc geninfo_all_blocks=1 00:49:24.013 --rc geninfo_unexecuted_blocks=1 00:49:24.013 00:49:24.013 ' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:24.013 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:24.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:24.013 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2383337 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2383337' 00:49:24.013 Process pid: 2383337 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:49:24.013 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2383337 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2383337 ']' 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:24.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:24.014 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:24.014 [2024-12-09 11:00:25.036852] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:49:24.014 [2024-12-09 11:00:25.036939] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:24.014 [2024-12-09 11:00:25.167438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:24.273 [2024-12-09 11:00:25.221426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:24.273 [2024-12-09 11:00:25.221478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:24.273 [2024-12-09 11:00:25.221493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:24.273 [2024-12-09 11:00:25.221507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:24.273 [2024-12-09 11:00:25.221519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:49:24.273 [2024-12-09 11:00:25.223134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:24.273 [2024-12-09 11:00:25.223224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:24.273 [2024-12-09 11:00:25.223228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:24.273 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:24.273 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:49:24.273 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.215 11:00:26 
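The `waitforlisten 2383337` step above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying with a cap (`max_retries=100` in the trace). A rough sketch of that polling pattern, reduced to waiting for the UNIX-domain socket file to appear (function name and retry interval are illustrative; SPDK's real helper in `autotest_common.sh` also probes the RPC endpoint):

```shell
# Poll until a UNIX-domain socket exists at $1, up to $2 attempts
# (default 100), sleeping 0.1 s between attempts. Returns 1 on timeout.
waitforsocket() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$path" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Bounding the retries matters in CI: if the target crashes on startup, the test fails after ~10 seconds with a clear timeout instead of hanging the whole pipeline.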
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:25.215 malloc0 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:49:25.475 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:49:25.475 00:49:25.475 00:49:25.475 CUnit - A unit testing framework for C - Version 2.1-3 00:49:25.475 http://cunit.sourceforge.net/ 00:49:25.475 00:49:25.475 00:49:25.475 Suite: nvme_compliance 00:49:25.735 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 11:00:26.672210] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:25.735 [2024-12-09 11:00:26.673691] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:49:25.735 [2024-12-09 11:00:26.673712] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:49:25.735 [2024-12-09 11:00:26.673721] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:49:25.735 [2024-12-09 11:00:26.678245] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:25.735 passed 00:49:25.735 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 11:00:26.781016] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:25.735 [2024-12-09 11:00:26.786045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:25.735 passed 00:49:25.735 Test: admin_identify_ns ...[2024-12-09 11:00:26.895473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:25.995 [2024-12-09 11:00:26.955663] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:49:25.995 [2024-12-09 11:00:26.963662] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:49:25.995 [2024-12-09 11:00:26.984797] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:49:25.995 passed 00:49:25.995 Test: admin_get_features_mandatory_features ...[2024-12-09 11:00:27.083914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:25.995 [2024-12-09 11:00:27.088948] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:25.995 passed 00:49:26.255 Test: admin_get_features_optional_features ...[2024-12-09 11:00:27.192588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.255 [2024-12-09 11:00:27.195611] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:26.255 passed 00:49:26.255 Test: admin_set_features_number_of_queues ...[2024-12-09 11:00:27.298767] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.255 [2024-12-09 11:00:27.405807] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:26.516 passed 00:49:26.516 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 11:00:27.506965] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.516 [2024-12-09 11:00:27.509983] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:26.516 passed 00:49:26.516 Test: admin_get_log_page_with_lpo ...[2024-12-09 11:00:27.613367] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.516 [2024-12-09 11:00:27.681663] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:49:26.776 [2024-12-09 11:00:27.694748] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:26.776 passed 00:49:26.776 Test: fabric_property_get ...[2024-12-09 11:00:27.797924] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.776 [2024-12-09 11:00:27.799247] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:49:26.776 [2024-12-09 11:00:27.801953] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:26.776 passed 00:49:26.776 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 11:00:27.906664] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:26.776 [2024-12-09 11:00:27.907983] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:49:26.776 [2024-12-09 11:00:27.909674] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.038 passed 00:49:27.038 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 11:00:28.013762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.038 [2024-12-09 11:00:28.099660] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:49:27.038 [2024-12-09 11:00:28.115671] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:49:27.038 [2024-12-09 11:00:28.120770] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.038 passed 00:49:27.299 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 11:00:28.223876] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.299 [2024-12-09 11:00:28.225197] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:49:27.299 [2024-12-09 11:00:28.226894] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.299 passed 00:49:27.299 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 11:00:28.329489] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.299 [2024-12-09 11:00:28.405654] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:49:27.299 [2024-12-09 
11:00:28.429662] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:49:27.299 [2024-12-09 11:00:28.433942] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.559 passed 00:49:27.559 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 11:00:28.535100] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.559 [2024-12-09 11:00:28.536410] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:49:27.559 [2024-12-09 11:00:28.536447] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:49:27.559 [2024-12-09 11:00:28.540129] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.559 passed 00:49:27.559 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 11:00:28.642544] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.820 [2024-12-09 11:00:28.735672] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:49:27.820 [2024-12-09 11:00:28.743657] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:49:27.820 [2024-12-09 11:00:28.751655] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:49:27.820 [2024-12-09 11:00:28.759653] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:49:27.820 [2024-12-09 11:00:28.788765] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.820 passed 00:49:27.820 Test: admin_create_io_sq_verify_pc ...[2024-12-09 11:00:28.891861] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:27.820 [2024-12-09 11:00:28.905671] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:49:27.820 [2024-12-09 11:00:28.926267] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:27.820 passed 00:49:28.080 Test: admin_create_io_qp_max_qps ...[2024-12-09 11:00:29.028947] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:29.023 [2024-12-09 11:00:30.119666] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:49:29.594 [2024-12-09 11:00:30.500078] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:29.594 passed 00:49:29.594 Test: admin_create_io_sq_shared_cq ...[2024-12-09 11:00:30.604779] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:49:29.594 [2024-12-09 11:00:30.734655] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:49:29.854 [2024-12-09 11:00:30.771749] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:49:29.854 passed 00:49:29.854 00:49:29.854 Run Summary: Type Total Ran Passed Failed Inactive 00:49:29.854 suites 1 1 n/a 0 0 00:49:29.854 tests 18 18 18 0 0 00:49:29.854 asserts 360 360 360 0 n/a 00:49:29.854 00:49:29.854 Elapsed time = 1.746 seconds 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2383337 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2383337 ']' 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2383337 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2383337 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2383337' 00:49:29.854 killing process with pid 2383337 00:49:29.854 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2383337 00:49:29.855 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2383337 00:49:30.115 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:49:30.115 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:49:30.115 00:49:30.116 real 0m6.445s 00:49:30.116 user 0m17.631s 00:49:30.116 sys 0m0.773s 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:49:30.116 ************************************ 00:49:30.116 END TEST nvmf_vfio_user_nvme_compliance 00:49:30.116 ************************************ 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:30.116 ************************************ 00:49:30.116 START TEST nvmf_vfio_user_fuzz 00:49:30.116 ************************************ 00:49:30.116 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:49:30.377 * Looking for test storage... 00:49:30.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:49:30.377 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:49:30.378 11:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:30.378 --rc genhtml_branch_coverage=1 00:49:30.378 --rc genhtml_function_coverage=1 00:49:30.378 --rc genhtml_legend=1 00:49:30.378 --rc geninfo_all_blocks=1 00:49:30.378 --rc geninfo_unexecuted_blocks=1 00:49:30.378 00:49:30.378 ' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:30.378 --rc genhtml_branch_coverage=1 00:49:30.378 --rc genhtml_function_coverage=1 00:49:30.378 --rc genhtml_legend=1 00:49:30.378 --rc geninfo_all_blocks=1 00:49:30.378 --rc geninfo_unexecuted_blocks=1 00:49:30.378 00:49:30.378 ' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:30.378 --rc genhtml_branch_coverage=1 00:49:30.378 --rc genhtml_function_coverage=1 00:49:30.378 --rc genhtml_legend=1 00:49:30.378 --rc geninfo_all_blocks=1 00:49:30.378 --rc geninfo_unexecuted_blocks=1 00:49:30.378 00:49:30.378 ' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:30.378 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:49:30.378 --rc genhtml_branch_coverage=1 00:49:30.378 --rc genhtml_function_coverage=1 00:49:30.378 --rc genhtml_legend=1 00:49:30.378 --rc geninfo_all_blocks=1 00:49:30.378 --rc geninfo_unexecuted_blocks=1 00:49:30.378 00:49:30.378 ' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:30.378 11:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:30.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2384283 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2384283' 00:49:30.378 Process pid: 2384283 00:49:30.378 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2384283 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2384283 ']' 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:30.379 11:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:30.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:30.379 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:31.320 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:31.320 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:49:31.320 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:32.704 malloc0 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:49:32.704 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:50:04.827 Fuzzing completed. Shutting down the fuzz application 00:50:04.827 00:50:04.827 Dumping successful admin opcodes: 00:50:04.827 9, 10, 00:50:04.827 Dumping successful io opcodes: 00:50:04.827 0, 00:50:04.827 NS: 0x20000081ef00 I/O qp, Total commands completed: 616825, total successful commands: 2384, random_seed: 2971670144 00:50:04.827 NS: 0x20000081ef00 admin qp, Total commands completed: 78256, total successful commands: 16, random_seed: 3020033472 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2384283 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2384283 ']' 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2384283 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2384283 00:50:04.827 11:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2384283' 00:50:04.827 killing process with pid 2384283 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2384283 00:50:04.827 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2384283 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:50:04.827 00:50:04.827 real 0m34.070s 00:50:04.827 user 0m36.906s 00:50:04.827 sys 0m27.194s 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:50:04.827 ************************************ 00:50:04.827 END TEST nvmf_vfio_user_fuzz 00:50:04.827 ************************************ 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:50:04.827 ************************************ 00:50:04.827 START TEST nvmf_auth_target 00:50:04.827 ************************************ 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:50:04.827 * Looking for test storage... 00:50:04.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:50:04.827 11:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:04.827 11:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:50:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:04.827 --rc genhtml_branch_coverage=1 00:50:04.827 --rc genhtml_function_coverage=1 00:50:04.827 --rc genhtml_legend=1 00:50:04.827 --rc geninfo_all_blocks=1 00:50:04.827 --rc geninfo_unexecuted_blocks=1 00:50:04.827 00:50:04.827 ' 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:50:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:04.827 --rc genhtml_branch_coverage=1 00:50:04.827 --rc genhtml_function_coverage=1 00:50:04.827 --rc genhtml_legend=1 00:50:04.827 --rc geninfo_all_blocks=1 00:50:04.827 --rc geninfo_unexecuted_blocks=1 00:50:04.827 00:50:04.827 ' 00:50:04.827 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:50:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:04.827 --rc genhtml_branch_coverage=1 00:50:04.827 --rc genhtml_function_coverage=1 00:50:04.827 --rc genhtml_legend=1 00:50:04.828 --rc geninfo_all_blocks=1 00:50:04.828 --rc geninfo_unexecuted_blocks=1 00:50:04.828 00:50:04.828 ' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:50:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:04.828 --rc genhtml_branch_coverage=1 00:50:04.828 --rc genhtml_function_coverage=1 00:50:04.828 --rc genhtml_legend=1 00:50:04.828 
--rc geninfo_all_blocks=1 00:50:04.828 --rc geninfo_unexecuted_blocks=1 00:50:04.828 00:50:04.828 ' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:04.828 
11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:04.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:50:04.828 11:01:05 
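
The `[: : integer expression expected` message logged above comes from `nvmf/common.sh` line 33, where `'[' '' -eq 1 ']'` applies a numeric operator to an empty (unset) variable. A minimal standalone reproduction of the failure mode and the usual defensive fix, defaulting the variable before the numeric comparison (the variable name `VAR` here is illustrative, not the one used by the script):

```shell
# Hypothetical reproduction: `[ "$VAR" -eq 1 ]` with VAR empty prints
# "integer expression expected" to stderr and returns nonzero.
VAR=""
# Defensive form: substitute 0 when the variable is empty or unset,
# so the numeric test always sees an integer.
if [ "${VAR:-0}" -eq 1 ]; then
  result=enabled
else
  result=disabled
fi
echo "$result"
```

With `VAR` empty the defaulted test evaluates `0 -eq 1` cleanly and takes the `disabled` branch instead of emitting the error seen in the log.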
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:50:04.828 11:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:50:04.828 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:50:11.419 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:50:11.420 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:50:11.420 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:50:11.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:50:11.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:11.420 
11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:50:11.420 Found net devices under 0000:af:00.0: cvl_0_0 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:11.420 
11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:50:11.420 Found net devices under 0000:af:00.1: cvl_0_1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:50:11.420 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:50:11.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:11.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:50:11.420 00:50:11.420 --- 10.0.0.2 ping statistics --- 00:50:11.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:11.420 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:11.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:11.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:50:11.420 00:50:11.420 --- 10.0.0.1 ping statistics --- 00:50:11.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:11.420 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
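
The `nvmf_tcp_init` sequence traced above builds the test topology by moving one interface into a network namespace (`cvl_0_0_ns_spdk`) as the target side, leaving the other in the root namespace as the initiator, then verifying reachability with `ping`. A sketch of the same sequence as a function, substituting a veth pair for the physical `cvl_0_*` E810 interfaces so it can run on a machine without the test NICs (the veth substitution and the function name are assumptions; interface names and addresses follow the log):

```shell
# Sketch of the netns topology from the log, using a veth pair in place of
# the physical cvl_0_0/cvl_0_1 interfaces. Must be run as root.
setup_netns_pair() {
  ip netns add cvl_0_0_ns_spdk                 # target-side namespace
  ip link add cvl_0_1 type veth peer name cvl_0_0
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target end into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                           # initiator -> target check
}
# Invoke with: sudo bash -c "$(declare -f setup_netns_pair); setup_netns_pair"
```

Running the target inside a namespace is what lets the log later prefix the `nvmf_tgt` launch with `ip netns exec cvl_0_0_ns_spdk`, so target and initiator get separate network stacks on one host.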
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:11.420 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2391362 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2391362 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2391362 ']' 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:11.421 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2391556 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:50:11.682 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:50:11.683 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a1a543337f4ba521f79b8838277a965e4677282f0b9ea932 00:50:11.683 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rDU 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a1a543337f4ba521f79b8838277a965e4677282f0b9ea932 0 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a1a543337f4ba521f79b8838277a965e4677282f0b9ea932 0 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a1a543337f4ba521f79b8838277a965e4677282f0b9ea932 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rDU 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rDU 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rDU 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
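
The `gen_dhchap_key null 48` trace above shows the key recipe: draw `len/2` random bytes with `xxd -p` (48 hex characters for a 48-length key), write the formatted key to a `mktemp` file, and restrict it to mode 0600. A sketch of just that random-material step, under the caveat that the actual `DHHC-1` wrapping of the hex string is done by the inline `python -` helper in `nvmf/common.sh`, which this sketch does not reproduce:

```shell
# Sketch of the random-key step of gen_dhchap_key as traced in the log.
# The DHHC-1 formatting applied afterwards by common.sh's python helper
# is intentionally omitted here.
len=48                                       # hex characters requested
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as hex
file=$(mktemp -t spdk.key-null.XXX)          # e.g. /tmp/spdk.key-null.rDU
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"                           # keys must not be world-readable
echo "$file"
```

The test repeats this pattern for each digest (`null`, `sha256`, `sha384`, `sha512`), varying only the length and the temp-file prefix.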
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e6b216eb9631f9ddf4baf81c9edb27df6ba8f9de7d8b5b22dd63b2837d042a86 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PKI 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e6b216eb9631f9ddf4baf81c9edb27df6ba8f9de7d8b5b22dd63b2837d042a86 3 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e6b216eb9631f9ddf4baf81c9edb27df6ba8f9de7d8b5b22dd63b2837d042a86 3 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e6b216eb9631f9ddf4baf81c9edb27df6ba8f9de7d8b5b22dd63b2837d042a86 00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3
00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PKI
00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PKI
00:50:11.944 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PKI
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:50:11.945 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=25d78d6114077b7d5e86ff04e80de3f6
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xck
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 25d78d6114077b7d5e86ff04e80de3f6 1
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 25d78d6114077b7d5e86ff04e80de3f6 1
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=25d78d6114077b7d5e86ff04e80de3f6
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xck
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xck
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xck
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=53cfdb3f68423463a504ed01b068295132650548a9879313
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6kc
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 53cfdb3f68423463a504ed01b068295132650548a9879313 2
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 53cfdb3f68423463a504ed01b068295132650548a9879313 2
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=53cfdb3f68423463a504ed01b068295132650548a9879313
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:50:11.945 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6kc
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6kc
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.6kc
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4efd8d58ee57cc110e8b0ac8cca7b92e944560870e306d2c
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8fz
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4efd8d58ee57cc110e8b0ac8cca7b92e944560870e306d2c 2
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4efd8d58ee57cc110e8b0ac8cca7b92e944560870e306d2c 2
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4efd8d58ee57cc110e8b0ac8cca7b92e944560870e306d2c
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8fz
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8fz
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.8fz
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fd44ad1895caf9a87d0db02e355b0dd2
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0xq
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fd44ad1895caf9a87d0db02e355b0dd2 1
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fd44ad1895caf9a87d0db02e355b0dd2 1
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fd44ad1895caf9a87d0db02e355b0dd2
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0xq
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0xq
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.0xq
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d1bdbf12102eba5ac8279eb9980f267dc97ea145c818533e4e49bb72191827a6
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DNF
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d1bdbf12102eba5ac8279eb9980f267dc97ea145c818533e4e49bb72191827a6 3
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d1bdbf12102eba5ac8279eb9980f267dc97ea145c818533e4e49bb72191827a6 3
00:50:12.206 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:50:12.207 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:50:12.207 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d1bdbf12102eba5ac8279eb9980f267dc97ea145c818533e4e49bb72191827a6
00:50:12.207 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:50:12.207 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DNF
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DNF
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.DNF
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2391362
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2391362 ']'
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:50:12.467 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2391556 /var/tmp/host.sock
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2391556 ']'
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:50:12.728 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rDU
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:12.987 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:12.987 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:12.988 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rDU
00:50:12.988 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rDU
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PKI ]]
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PKI
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PKI
00:50:13.248 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PKI
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xck
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xck
00:50:13.508 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xck
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.6kc ]]
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6kc
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6kc
00:50:13.769 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6kc
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8fz
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8fz
00:50:14.030 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8fz
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.0xq ]]
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0xq
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0xq
00:50:14.291 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0xq
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DNF
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DNF
00:50:14.552 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DNF
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:14.812 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:50:15.384 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:50:15.645
00:50:15.645 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:50:15.645 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:50:15.645 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:50:15.906 {
00:50:15.906 "cntlid": 1,
00:50:15.906 "qid": 0,
00:50:15.906 "state": "enabled",
00:50:15.906 "thread": "nvmf_tgt_poll_group_000",
00:50:15.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:50:15.906 "listen_address": {
00:50:15.906 "trtype": "TCP",
00:50:15.906 "adrfam": "IPv4",
00:50:15.906 "traddr": "10.0.0.2",
00:50:15.906 "trsvcid": "4420"
00:50:15.906 },
00:50:15.906 "peer_address": {
00:50:15.906 "trtype": "TCP",
00:50:15.906 "adrfam": "IPv4",
00:50:15.906 "traddr": "10.0.0.1",
00:50:15.906 "trsvcid": "49126"
00:50:15.906 },
00:50:15.906 "auth": {
00:50:15.906 "state": "completed",
00:50:15.906 "digest": "sha256",
00:50:15.906 "dhgroup": "null"
00:50:15.906 }
00:50:15.906 }
00:50:15.906 ]'
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:50:15.906 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:50:15.906 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:50:15.906 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:50:15.906 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:50:16.168 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:50:16.168 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:50:20.376 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:50:20.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:50:20.377 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:50:20.377 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:20.377 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:50:20.377 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:50:20.637
00:50:20.637 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:50:20.637 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:50:20.637 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:50:20.912 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:50:20.912 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:50:20.913 {
00:50:20.913 "cntlid": 3,
00:50:20.913 "qid": 0,
00:50:20.913 "state": "enabled",
00:50:20.913 "thread": "nvmf_tgt_poll_group_000",
00:50:20.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:50:20.913 "listen_address": {
00:50:20.913 "trtype": "TCP",
00:50:20.913 "adrfam": "IPv4", "traddr": "10.0.0.2",
00:50:20.913 "trsvcid": "4420"
00:50:20.913 },
00:50:20.913 "peer_address": {
00:50:20.913 "trtype": "TCP",
00:50:20.913 "adrfam": "IPv4",
00:50:20.913 "traddr": "10.0.0.1",
00:50:20.913 "trsvcid": "49156"
00:50:20.913 },
00:50:20.913 "auth": {
00:50:20.913 "state": "completed",
00:50:20.913 "digest": "sha256",
00:50:20.913 "dhgroup": "null"
00:50:20.913 }
00:50:20.913 }
00:50:20.913 ]'
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:50:20.913 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:50:20.913 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:50:20.913 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:50:21.174 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:50:21.174 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:50:22.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:22.117 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67
-- # dhgroup=null 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:22.379 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:22.640 00:50:22.640 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:22.640 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:22.640 
11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:22.901 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:22.901 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:22.901 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.901 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:23.162 { 00:50:23.162 "cntlid": 5, 00:50:23.162 "qid": 0, 00:50:23.162 "state": "enabled", 00:50:23.162 "thread": "nvmf_tgt_poll_group_000", 00:50:23.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:23.162 "listen_address": { 00:50:23.162 "trtype": "TCP", 00:50:23.162 "adrfam": "IPv4", 00:50:23.162 "traddr": "10.0.0.2", 00:50:23.162 "trsvcid": "4420" 00:50:23.162 }, 00:50:23.162 "peer_address": { 00:50:23.162 "trtype": "TCP", 00:50:23.162 "adrfam": "IPv4", 00:50:23.162 "traddr": "10.0.0.1", 00:50:23.162 "trsvcid": "49172" 00:50:23.162 }, 00:50:23.162 "auth": { 00:50:23.162 "state": "completed", 00:50:23.162 "digest": "sha256", 00:50:23.162 "dhgroup": "null" 00:50:23.162 } 00:50:23.162 } 00:50:23.162 ]' 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:23.162 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:23.422 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:23.422 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:24.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:50:24.365 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:24.625 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:50:24.626 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:24.626 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:24.626 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:24.626 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:24.886 00:50:24.886 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:24.887 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:24.887 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:25.147 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:25.147 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:25.147 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.147 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:25.148 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.148 
11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:25.148 { 00:50:25.148 "cntlid": 7, 00:50:25.148 "qid": 0, 00:50:25.148 "state": "enabled", 00:50:25.148 "thread": "nvmf_tgt_poll_group_000", 00:50:25.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:25.148 "listen_address": { 00:50:25.148 "trtype": "TCP", 00:50:25.148 "adrfam": "IPv4", 00:50:25.148 "traddr": "10.0.0.2", 00:50:25.148 "trsvcid": "4420" 00:50:25.148 }, 00:50:25.148 "peer_address": { 00:50:25.148 "trtype": "TCP", 00:50:25.148 "adrfam": "IPv4", 00:50:25.148 "traddr": "10.0.0.1", 00:50:25.148 "trsvcid": "41330" 00:50:25.148 }, 00:50:25.148 "auth": { 00:50:25.148 "state": "completed", 00:50:25.148 "digest": "sha256", 00:50:25.148 "dhgroup": "null" 00:50:25.148 } 00:50:25.148 } 00:50:25.148 ]' 00:50:25.148 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:25.148 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:25.148 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:25.408 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:25.408 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:25.408 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:25.408 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:25.408 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:25.669 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:25.669 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:26.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:26.613 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:26.874 00:50:26.874 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:26.874 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:26.874 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:27.135 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:27.135 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:27.135 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:27.135 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:27.396 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:27.396 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:27.396 { 00:50:27.396 "cntlid": 9, 00:50:27.396 "qid": 0, 00:50:27.396 "state": "enabled", 00:50:27.396 "thread": "nvmf_tgt_poll_group_000", 00:50:27.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:27.396 "listen_address": { 00:50:27.396 "trtype": "TCP", 00:50:27.396 "adrfam": "IPv4", 00:50:27.396 "traddr": "10.0.0.2", 00:50:27.396 "trsvcid": "4420" 00:50:27.396 }, 00:50:27.396 "peer_address": { 00:50:27.396 "trtype": "TCP", 00:50:27.396 "adrfam": "IPv4", 00:50:27.396 "traddr": "10.0.0.1", 00:50:27.396 "trsvcid": "41350" 00:50:27.396 
}, 00:50:27.396 "auth": { 00:50:27.396 "state": "completed", 00:50:27.396 "digest": "sha256", 00:50:27.396 "dhgroup": "ffdhe2048" 00:50:27.396 } 00:50:27.396 } 00:50:27.396 ]' 00:50:27.396 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:27.396 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:27.397 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:27.657 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:27.657 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:28.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:50:28.599 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:28.600 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:29.170 00:50:29.170 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:29.170 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:29.170 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:29.432 { 00:50:29.432 "cntlid": 11, 00:50:29.432 "qid": 0, 00:50:29.432 "state": "enabled", 00:50:29.432 "thread": "nvmf_tgt_poll_group_000", 00:50:29.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:29.432 "listen_address": { 00:50:29.432 "trtype": "TCP", 00:50:29.432 "adrfam": "IPv4", 00:50:29.432 "traddr": "10.0.0.2", 00:50:29.432 "trsvcid": "4420" 00:50:29.432 }, 00:50:29.432 "peer_address": { 00:50:29.432 "trtype": "TCP", 00:50:29.432 "adrfam": "IPv4", 00:50:29.432 "traddr": "10.0.0.1", 00:50:29.432 "trsvcid": "41364" 00:50:29.432 }, 00:50:29.432 "auth": { 00:50:29.432 "state": "completed", 00:50:29.432 "digest": "sha256", 00:50:29.432 "dhgroup": "ffdhe2048" 00:50:29.432 } 00:50:29.432 } 00:50:29.432 ]' 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:29.432 11:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:29.432 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:29.693 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:29.693 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:30.635 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:30.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:30.635 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:30.635 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:50:30.635 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:30.636 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:30.636 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:30.636 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:30.636 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:50:30.896 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:30.897 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:30.897 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:30.897 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:31.157 00:50:31.157 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:31.157 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:31.157 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:31.418 11:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:31.418 { 00:50:31.418 "cntlid": 13, 00:50:31.418 "qid": 0, 00:50:31.418 "state": "enabled", 00:50:31.418 "thread": "nvmf_tgt_poll_group_000", 00:50:31.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:31.418 "listen_address": { 00:50:31.418 "trtype": "TCP", 00:50:31.418 "adrfam": "IPv4", 00:50:31.418 "traddr": "10.0.0.2", 00:50:31.418 "trsvcid": "4420" 00:50:31.418 }, 00:50:31.418 "peer_address": { 00:50:31.418 "trtype": "TCP", 00:50:31.418 "adrfam": "IPv4", 00:50:31.418 "traddr": "10.0.0.1", 00:50:31.418 "trsvcid": "41400" 00:50:31.418 }, 00:50:31.418 "auth": { 00:50:31.418 "state": "completed", 00:50:31.418 "digest": "sha256", 00:50:31.418 "dhgroup": "ffdhe2048" 00:50:31.418 } 00:50:31.418 } 00:50:31.418 ]' 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:31.418 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:31.679 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:31.679 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:31.679 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:31.679 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:31.679 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:31.941 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:31.941 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:32.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:32.882 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:32.883 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:32.883 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:32.883 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:33.142 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:33.143 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:33.143 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:33.143 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:33.143 00:50:33.403 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:33.403 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:33.403 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:33.664 { 00:50:33.664 "cntlid": 15, 00:50:33.664 "qid": 0, 00:50:33.664 "state": "enabled", 00:50:33.664 "thread": "nvmf_tgt_poll_group_000", 00:50:33.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:33.664 "listen_address": { 00:50:33.664 "trtype": "TCP", 00:50:33.664 "adrfam": "IPv4", 00:50:33.664 "traddr": "10.0.0.2", 00:50:33.664 "trsvcid": "4420" 00:50:33.664 }, 00:50:33.664 "peer_address": { 00:50:33.664 "trtype": "TCP", 00:50:33.664 "adrfam": "IPv4", 00:50:33.664 "traddr": "10.0.0.1", 
00:50:33.664 "trsvcid": "41408" 00:50:33.664 }, 00:50:33.664 "auth": { 00:50:33.664 "state": "completed", 00:50:33.664 "digest": "sha256", 00:50:33.664 "dhgroup": "ffdhe2048" 00:50:33.664 } 00:50:33.664 } 00:50:33.664 ]' 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:33.664 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:33.665 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:33.665 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:33.665 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:33.925 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:33.925 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:34.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:34.865 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:35.126 11:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:35.126 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:35.387 00:50:35.387 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:35.387 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:35.387 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:35.647 { 00:50:35.647 "cntlid": 17, 00:50:35.647 "qid": 0, 00:50:35.647 "state": "enabled", 00:50:35.647 "thread": "nvmf_tgt_poll_group_000", 00:50:35.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:35.647 "listen_address": { 00:50:35.647 "trtype": "TCP", 00:50:35.647 "adrfam": "IPv4", 00:50:35.647 "traddr": "10.0.0.2", 00:50:35.647 "trsvcid": "4420" 00:50:35.647 }, 00:50:35.647 "peer_address": { 00:50:35.647 "trtype": "TCP", 00:50:35.647 "adrfam": "IPv4", 00:50:35.647 "traddr": "10.0.0.1", 00:50:35.647 "trsvcid": "48824" 00:50:35.647 }, 00:50:35.647 "auth": { 00:50:35.647 "state": "completed", 00:50:35.647 "digest": "sha256", 00:50:35.647 "dhgroup": "ffdhe3072" 00:50:35.647 } 00:50:35.647 } 00:50:35.647 ]' 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:35.647 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:35.908 11:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:35.908 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:35.908 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:35.908 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:35.908 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:36.168 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:36.168 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:36.738 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:36.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:36.738 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:36.738 11:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:36.738 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:36.999 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:36.999 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:36.999 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:36.999 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:37.260 11:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:37.260 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:37.520 00:50:37.520 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:37.520 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:37.520 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:37.781 { 00:50:37.781 "cntlid": 19, 00:50:37.781 "qid": 0, 00:50:37.781 "state": "enabled", 00:50:37.781 "thread": "nvmf_tgt_poll_group_000", 00:50:37.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:37.781 "listen_address": { 00:50:37.781 "trtype": "TCP", 00:50:37.781 "adrfam": "IPv4", 00:50:37.781 "traddr": "10.0.0.2", 00:50:37.781 "trsvcid": "4420" 00:50:37.781 }, 00:50:37.781 "peer_address": { 00:50:37.781 "trtype": "TCP", 00:50:37.781 "adrfam": "IPv4", 00:50:37.781 "traddr": "10.0.0.1", 00:50:37.781 "trsvcid": "48854" 00:50:37.781 }, 00:50:37.781 "auth": { 00:50:37.781 "state": "completed", 00:50:37.781 "digest": "sha256", 00:50:37.781 "dhgroup": "ffdhe3072" 00:50:37.781 } 00:50:37.781 } 00:50:37.781 ]' 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:37.781 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:38.041 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:38.041 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:38.041 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:38.041 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:38.041 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:38.302 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:38.302 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:38.874 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:38.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:38.874 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:38.874 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:38.874 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:39.135 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:39.135 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:39.135 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:39.135 11:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:39.396 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:39.656 00:50:39.656 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:39.656 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:39.656 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:39.916 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:39.916 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:39.916 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:39.916 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:39.916 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:39.916 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:39.916 { 00:50:39.916 "cntlid": 21, 00:50:39.916 "qid": 0, 00:50:39.916 "state": "enabled", 00:50:39.916 "thread": "nvmf_tgt_poll_group_000", 00:50:39.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:39.916 "listen_address": { 00:50:39.916 "trtype": "TCP", 00:50:39.916 "adrfam": "IPv4", 00:50:39.916 "traddr": "10.0.0.2", 00:50:39.916 
"trsvcid": "4420" 00:50:39.916 }, 00:50:39.916 "peer_address": { 00:50:39.916 "trtype": "TCP", 00:50:39.916 "adrfam": "IPv4", 00:50:39.916 "traddr": "10.0.0.1", 00:50:39.916 "trsvcid": "48884" 00:50:39.916 }, 00:50:39.916 "auth": { 00:50:39.916 "state": "completed", 00:50:39.916 "digest": "sha256", 00:50:39.916 "dhgroup": "ffdhe3072" 00:50:39.916 } 00:50:39.916 } 00:50:39.916 ]' 00:50:39.916 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:39.916 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:39.916 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:40.176 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:40.176 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:40.177 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:40.177 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:40.177 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:40.437 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:40.437 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:41.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:41.377 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:41.947 00:50:41.947 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:41.947 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:41.947 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:42.235 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:42.235 { 00:50:42.235 "cntlid": 23, 00:50:42.235 "qid": 0, 00:50:42.235 "state": "enabled", 00:50:42.235 "thread": "nvmf_tgt_poll_group_000", 00:50:42.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:42.236 "listen_address": { 00:50:42.236 "trtype": "TCP", 00:50:42.236 "adrfam": "IPv4", 00:50:42.236 "traddr": "10.0.0.2", 00:50:42.236 "trsvcid": "4420" 00:50:42.236 }, 00:50:42.236 "peer_address": { 00:50:42.236 "trtype": "TCP", 00:50:42.236 "adrfam": "IPv4", 00:50:42.236 "traddr": "10.0.0.1", 00:50:42.236 "trsvcid": "48916" 00:50:42.236 }, 00:50:42.236 "auth": { 00:50:42.236 "state": "completed", 00:50:42.236 "digest": "sha256", 00:50:42.236 "dhgroup": "ffdhe3072" 00:50:42.236 } 00:50:42.236 } 00:50:42.236 ]' 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:42.236 11:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:42.236 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:42.496 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:42.496 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:43.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:43.439 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:43.700 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:43.961 00:50:43.961 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:43.961 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:43.961 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:44.221 11:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:44.221 { 00:50:44.221 "cntlid": 25, 00:50:44.221 "qid": 0, 00:50:44.221 "state": "enabled", 00:50:44.221 "thread": "nvmf_tgt_poll_group_000", 00:50:44.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:44.221 "listen_address": { 00:50:44.221 "trtype": "TCP", 00:50:44.221 "adrfam": "IPv4", 00:50:44.221 "traddr": "10.0.0.2", 00:50:44.221 "trsvcid": "4420" 00:50:44.221 }, 00:50:44.221 "peer_address": { 00:50:44.221 "trtype": "TCP", 00:50:44.221 "adrfam": "IPv4", 00:50:44.221 "traddr": "10.0.0.1", 00:50:44.221 "trsvcid": "48932" 00:50:44.221 }, 00:50:44.221 "auth": { 00:50:44.221 "state": "completed", 00:50:44.221 "digest": "sha256", 00:50:44.221 "dhgroup": "ffdhe4096" 00:50:44.221 } 00:50:44.221 } 00:50:44.221 ]' 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:44.221 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:44.481 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:44.481 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:45.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:45.423 11:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:45.423 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:45.684 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:45.684 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:45.684 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:45.684 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:45.944 00:50:45.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:45.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:45.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:46.206 { 00:50:46.206 "cntlid": 27, 00:50:46.206 "qid": 0, 00:50:46.206 "state": "enabled", 00:50:46.206 "thread": "nvmf_tgt_poll_group_000", 00:50:46.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:46.206 "listen_address": { 00:50:46.206 "trtype": "TCP", 00:50:46.206 "adrfam": "IPv4", 00:50:46.206 "traddr": "10.0.0.2", 00:50:46.206 
"trsvcid": "4420" 00:50:46.206 }, 00:50:46.206 "peer_address": { 00:50:46.206 "trtype": "TCP", 00:50:46.206 "adrfam": "IPv4", 00:50:46.206 "traddr": "10.0.0.1", 00:50:46.206 "trsvcid": "46278" 00:50:46.206 }, 00:50:46.206 "auth": { 00:50:46.206 "state": "completed", 00:50:46.206 "digest": "sha256", 00:50:46.206 "dhgroup": "ffdhe4096" 00:50:46.206 } 00:50:46.206 } 00:50:46.206 ]' 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:46.206 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:46.777 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:46.777 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:47.348 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:47.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:47.348 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:47.348 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:47.349 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:47.349 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:47.349 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:47.349 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:47.349 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:47.610 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:47.871 00:50:47.871 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:47.871 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:50:47.871 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:48.131 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:48.131 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:48.131 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:48.131 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:48.400 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:48.400 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:48.400 { 00:50:48.400 "cntlid": 29, 00:50:48.400 "qid": 0, 00:50:48.401 "state": "enabled", 00:50:48.401 "thread": "nvmf_tgt_poll_group_000", 00:50:48.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:48.401 "listen_address": { 00:50:48.401 "trtype": "TCP", 00:50:48.401 "adrfam": "IPv4", 00:50:48.401 "traddr": "10.0.0.2", 00:50:48.401 "trsvcid": "4420" 00:50:48.401 }, 00:50:48.401 "peer_address": { 00:50:48.401 "trtype": "TCP", 00:50:48.401 "adrfam": "IPv4", 00:50:48.401 "traddr": "10.0.0.1", 00:50:48.401 "trsvcid": "46296" 00:50:48.401 }, 00:50:48.401 "auth": { 00:50:48.401 "state": "completed", 00:50:48.401 "digest": "sha256", 00:50:48.401 "dhgroup": "ffdhe4096" 00:50:48.401 } 00:50:48.401 } 00:50:48.401 ]' 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:48.401 11:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:48.401 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:48.660 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:48.660 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:49.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:50:49.603 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:49.864 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:49.864 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:49.864 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:49.864 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:50.124 00:50:50.124 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:50.124 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:50.124 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:50.385 { 00:50:50.385 "cntlid": 31, 00:50:50.385 "qid": 0, 00:50:50.385 "state": "enabled", 00:50:50.385 "thread": "nvmf_tgt_poll_group_000", 00:50:50.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:50.385 "listen_address": { 00:50:50.385 "trtype": "TCP", 00:50:50.385 "adrfam": "IPv4", 00:50:50.385 "traddr": "10.0.0.2", 00:50:50.385 "trsvcid": "4420" 00:50:50.385 }, 00:50:50.385 "peer_address": { 00:50:50.385 "trtype": "TCP", 00:50:50.385 "adrfam": "IPv4", 00:50:50.385 "traddr": "10.0.0.1", 00:50:50.385 "trsvcid": "46332" 00:50:50.385 }, 00:50:50.385 "auth": { 00:50:50.385 "state": "completed", 00:50:50.385 "digest": "sha256", 00:50:50.385 "dhgroup": "ffdhe4096" 00:50:50.385 } 00:50:50.385 } 00:50:50.385 ]' 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:50.385 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:50.645 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:50.645 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:50.645 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:50.645 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:50.645 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:50.906 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:50.906 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:51.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:51.478 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:51.478 11:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:51.739 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:52.000 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:52.000 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:52.260 00:50:52.260 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:52.260 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:52.260 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:52.521 { 00:50:52.521 "cntlid": 33, 00:50:52.521 "qid": 0, 00:50:52.521 "state": "enabled", 00:50:52.521 "thread": "nvmf_tgt_poll_group_000", 00:50:52.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:52.521 "listen_address": { 00:50:52.521 "trtype": "TCP", 00:50:52.521 "adrfam": "IPv4", 00:50:52.521 "traddr": "10.0.0.2", 00:50:52.521 
"trsvcid": "4420" 00:50:52.521 }, 00:50:52.521 "peer_address": { 00:50:52.521 "trtype": "TCP", 00:50:52.521 "adrfam": "IPv4", 00:50:52.521 "traddr": "10.0.0.1", 00:50:52.521 "trsvcid": "46364" 00:50:52.521 }, 00:50:52.521 "auth": { 00:50:52.521 "state": "completed", 00:50:52.521 "digest": "sha256", 00:50:52.521 "dhgroup": "ffdhe6144" 00:50:52.521 } 00:50:52.521 } 00:50:52.521 ]' 00:50:52.521 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:52.781 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:53.042 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:53.042 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:53.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:53.985 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:53.985 11:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:53.985 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:54.556 00:50:54.556 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:54.556 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:54.556 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:54.817 { 00:50:54.817 "cntlid": 35, 00:50:54.817 "qid": 0, 00:50:54.817 "state": "enabled", 00:50:54.817 "thread": "nvmf_tgt_poll_group_000", 00:50:54.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:54.817 "listen_address": { 00:50:54.817 "trtype": "TCP", 00:50:54.817 "adrfam": "IPv4", 00:50:54.817 "traddr": "10.0.0.2", 00:50:54.817 "trsvcid": "4420" 00:50:54.817 }, 00:50:54.817 "peer_address": { 00:50:54.817 "trtype": "TCP", 00:50:54.817 "adrfam": "IPv4", 00:50:54.817 "traddr": "10.0.0.1", 00:50:54.817 "trsvcid": "34482" 00:50:54.817 }, 00:50:54.817 "auth": { 00:50:54.817 "state": "completed", 00:50:54.817 "digest": "sha256", 00:50:54.817 "dhgroup": "ffdhe6144" 00:50:54.817 } 00:50:54.817 } 00:50:54.817 ]' 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:54.817 11:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:54.817 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:55.077 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:55.077 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:55.077 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:55.077 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:55.077 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:55.336 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:55.336 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:50:55.904 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:55.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:55.904 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:55.904 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.904 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:56.163 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.163 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:56.163 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:56.163 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:56.424 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:56.684 00:50:56.684 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:56.684 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:56.684 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:56.944 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:57.204 11:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:57.204 { 00:50:57.204 "cntlid": 37, 00:50:57.204 "qid": 0, 00:50:57.204 "state": "enabled", 00:50:57.204 "thread": "nvmf_tgt_poll_group_000", 00:50:57.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:57.204 "listen_address": { 00:50:57.204 "trtype": "TCP", 00:50:57.204 "adrfam": "IPv4", 00:50:57.204 "traddr": "10.0.0.2", 00:50:57.204 "trsvcid": "4420" 00:50:57.204 }, 00:50:57.204 "peer_address": { 00:50:57.204 "trtype": "TCP", 00:50:57.204 "adrfam": "IPv4", 00:50:57.204 "traddr": "10.0.0.1", 00:50:57.204 "trsvcid": "34516" 00:50:57.204 }, 00:50:57.204 "auth": { 00:50:57.204 "state": "completed", 00:50:57.204 "digest": "sha256", 00:50:57.204 "dhgroup": "ffdhe6144" 00:50:57.204 } 00:50:57.204 } 00:50:57.204 ]' 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:57.204 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:57.464 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:57.464 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:58.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:58.519 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:58.915 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:58.915 00:50:58.915 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:58.915 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:58.915 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:59.186 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:59.187 { 00:50:59.187 "cntlid": 39, 00:50:59.187 "qid": 0, 00:50:59.187 "state": "enabled", 00:50:59.187 "thread": "nvmf_tgt_poll_group_000", 00:50:59.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:50:59.187 "listen_address": { 00:50:59.187 "trtype": "TCP", 00:50:59.187 "adrfam": 
"IPv4", 00:50:59.187 "traddr": "10.0.0.2", 00:50:59.187 "trsvcid": "4420" 00:50:59.187 }, 00:50:59.187 "peer_address": { 00:50:59.187 "trtype": "TCP", 00:50:59.187 "adrfam": "IPv4", 00:50:59.187 "traddr": "10.0.0.1", 00:50:59.187 "trsvcid": "34548" 00:50:59.187 }, 00:50:59.187 "auth": { 00:50:59.187 "state": "completed", 00:50:59.187 "digest": "sha256", 00:50:59.187 "dhgroup": "ffdhe6144" 00:50:59.187 } 00:50:59.187 } 00:50:59.187 ]' 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:50:59.187 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:50:59.480 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:00.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:00.462 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:51:00.730 
11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:00.730 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:01.343 00:51:01.343 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:01.343 11:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:01.343 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:01.624 { 00:51:01.624 "cntlid": 41, 00:51:01.624 "qid": 0, 00:51:01.624 "state": "enabled", 00:51:01.624 "thread": "nvmf_tgt_poll_group_000", 00:51:01.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:01.624 "listen_address": { 00:51:01.624 "trtype": "TCP", 00:51:01.624 "adrfam": "IPv4", 00:51:01.624 "traddr": "10.0.0.2", 00:51:01.624 "trsvcid": "4420" 00:51:01.624 }, 00:51:01.624 "peer_address": { 00:51:01.624 "trtype": "TCP", 00:51:01.624 "adrfam": "IPv4", 00:51:01.624 "traddr": "10.0.0.1", 00:51:01.624 "trsvcid": "34582" 00:51:01.624 }, 00:51:01.624 "auth": { 00:51:01.624 "state": "completed", 00:51:01.624 "digest": "sha256", 00:51:01.624 "dhgroup": "ffdhe8192" 00:51:01.624 } 00:51:01.624 } 00:51:01.624 ]' 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:01.624 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:01.886 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:01.886 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:01.886 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:02.146 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:02.146 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:02.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:02.716 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:02.975 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:02.976 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:02.976 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:02.976 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:03.915 00:51:03.915 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:03.915 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:03.915 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:03.915 11:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:03.915 { 00:51:03.915 "cntlid": 43, 00:51:03.915 "qid": 0, 00:51:03.915 "state": "enabled", 00:51:03.915 "thread": "nvmf_tgt_poll_group_000", 00:51:03.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:03.915 "listen_address": { 00:51:03.915 "trtype": "TCP", 00:51:03.915 "adrfam": "IPv4", 00:51:03.915 "traddr": "10.0.0.2", 00:51:03.915 "trsvcid": "4420" 00:51:03.915 }, 00:51:03.915 "peer_address": { 00:51:03.915 "trtype": "TCP", 00:51:03.915 "adrfam": "IPv4", 00:51:03.915 "traddr": "10.0.0.1", 00:51:03.915 "trsvcid": "34622" 00:51:03.915 }, 00:51:03.915 "auth": { 00:51:03.915 "state": "completed", 00:51:03.915 "digest": "sha256", 00:51:03.915 "dhgroup": "ffdhe8192" 00:51:03.915 } 00:51:03.915 } 00:51:03.915 ]' 00:51:03.915 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:04.175 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:04.435 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:04.435 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:05.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:05.375 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.636 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:06.206 00:51:06.206 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:06.206 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:06.206 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:06.466 { 00:51:06.466 "cntlid": 45, 00:51:06.466 "qid": 0, 00:51:06.466 "state": "enabled", 00:51:06.466 "thread": "nvmf_tgt_poll_group_000", 00:51:06.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:06.466 
"listen_address": { 00:51:06.466 "trtype": "TCP", 00:51:06.466 "adrfam": "IPv4", 00:51:06.466 "traddr": "10.0.0.2", 00:51:06.466 "trsvcid": "4420" 00:51:06.466 }, 00:51:06.466 "peer_address": { 00:51:06.466 "trtype": "TCP", 00:51:06.466 "adrfam": "IPv4", 00:51:06.466 "traddr": "10.0.0.1", 00:51:06.466 "trsvcid": "50692" 00:51:06.466 }, 00:51:06.466 "auth": { 00:51:06.466 "state": "completed", 00:51:06.466 "digest": "sha256", 00:51:06.466 "dhgroup": "ffdhe8192" 00:51:06.466 } 00:51:06.466 } 00:51:06.466 ]' 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:06.466 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:07.036 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:07.036 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:07.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:07.606 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:07.865 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:07.866 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:08.436 00:51:08.696 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:08.696 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:51:08.696 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:08.956 { 00:51:08.956 "cntlid": 47, 00:51:08.956 "qid": 0, 00:51:08.956 "state": "enabled", 00:51:08.956 "thread": "nvmf_tgt_poll_group_000", 00:51:08.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:08.956 "listen_address": { 00:51:08.956 "trtype": "TCP", 00:51:08.956 "adrfam": "IPv4", 00:51:08.956 "traddr": "10.0.0.2", 00:51:08.956 "trsvcid": "4420" 00:51:08.956 }, 00:51:08.956 "peer_address": { 00:51:08.956 "trtype": "TCP", 00:51:08.956 "adrfam": "IPv4", 00:51:08.956 "traddr": "10.0.0.1", 00:51:08.956 "trsvcid": "50720" 00:51:08.956 }, 00:51:08.956 "auth": { 00:51:08.956 "state": "completed", 00:51:08.956 "digest": "sha256", 00:51:08.956 "dhgroup": "ffdhe8192" 00:51:08.956 } 00:51:08.956 } 00:51:08.956 ]' 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:51:08.956 11:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:08.956 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:08.956 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:08.956 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:08.956 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:09.216 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:09.216 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:10.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:51:10.156 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:10.416 
11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:10.416 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:10.417 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:10.677 00:51:10.677 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:10.677 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:10.677 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:10.938 { 00:51:10.938 "cntlid": 49, 00:51:10.938 "qid": 0, 00:51:10.938 "state": "enabled", 00:51:10.938 "thread": "nvmf_tgt_poll_group_000", 00:51:10.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:10.938 "listen_address": { 00:51:10.938 "trtype": "TCP", 00:51:10.938 "adrfam": "IPv4", 00:51:10.938 "traddr": "10.0.0.2", 00:51:10.938 "trsvcid": "4420" 00:51:10.938 }, 00:51:10.938 "peer_address": { 00:51:10.938 "trtype": "TCP", 00:51:10.938 "adrfam": "IPv4", 00:51:10.938 "traddr": "10.0.0.1", 00:51:10.938 "trsvcid": "50752" 00:51:10.938 }, 00:51:10.938 "auth": { 00:51:10.938 "state": "completed", 00:51:10.938 "digest": "sha384", 00:51:10.938 "dhgroup": "null" 00:51:10.938 } 00:51:10.938 } 00:51:10.938 ]' 00:51:10.938 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:51:11.198 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:11.458 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:11.458 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:12.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:12.398 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:12.685 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:12.945
00:51:12.945 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:12.945 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:12.945 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:13.204 {
00:51:13.204 "cntlid": 51,
00:51:13.204 "qid": 0,
00:51:13.204 "state": "enabled",
00:51:13.204 "thread": "nvmf_tgt_poll_group_000",
00:51:13.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:13.204 "listen_address": {
00:51:13.204 "trtype": "TCP",
00:51:13.204 "adrfam": "IPv4",
00:51:13.204 "traddr": "10.0.0.2",
00:51:13.204 "trsvcid": "4420"
00:51:13.204 },
00:51:13.204 "peer_address": {
00:51:13.204 "trtype": "TCP",
00:51:13.204 "adrfam": "IPv4",
00:51:13.204 "traddr": "10.0.0.1",
00:51:13.204 "trsvcid": "50786"
00:51:13.204 },
00:51:13.204 "auth": {
00:51:13.204 "state": "completed",
00:51:13.204 "digest": "sha384",
00:51:13.204 "dhgroup": "null"
00:51:13.204 }
00:51:13.204 }
00:51:13.204 ]'
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:51:13.204 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:13.463 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:13.463 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:13.463 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:13.722 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:13.722 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:14.288 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:14.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:14.288 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:14.288 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:14.288 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:14.546 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:14.546 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:14.546 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:14.546 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:14.805 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:15.064
00:51:15.064 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:15.064 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:15.064 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:15.322 {
00:51:15.322 "cntlid": 53,
00:51:15.322 "qid": 0,
00:51:15.322 "state": "enabled",
00:51:15.322 "thread": "nvmf_tgt_poll_group_000",
00:51:15.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:15.322 "listen_address": {
00:51:15.322 "trtype": "TCP",
00:51:15.322 "adrfam": "IPv4",
00:51:15.322 "traddr": "10.0.0.2",
00:51:15.322 "trsvcid": "4420"
00:51:15.322 },
00:51:15.322 "peer_address": {
00:51:15.322 "trtype": "TCP",
00:51:15.322 "adrfam": "IPv4",
00:51:15.322 "traddr": "10.0.0.1",
00:51:15.322 "trsvcid": "33020"
00:51:15.322 },
00:51:15.322 "auth": {
00:51:15.322 "state": "completed",
00:51:15.322 "digest": "sha384",
00:51:15.322 "dhgroup": "null"
00:51:15.322 }
00:51:15.322 }
00:51:15.322 ]'
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:15.322 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:15.323 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:15.580 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:16.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:16.518 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:51:16.777 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:16.778 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:17.037
00:51:17.037 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:17.037 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:17.037 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:17.297 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:17.297 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:17.297 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:17.297 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:17.556 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:17.556 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:17.556 {
00:51:17.556 "cntlid": 55,
00:51:17.556 "qid": 0,
00:51:17.556 "state": "enabled",
00:51:17.556 "thread": "nvmf_tgt_poll_group_000",
00:51:17.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:17.556 "listen_address": {
00:51:17.556 "trtype": "TCP",
00:51:17.556 "adrfam": "IPv4",
00:51:17.556 "traddr": "10.0.0.2",
00:51:17.556 "trsvcid": "4420"
00:51:17.556 },
00:51:17.556 "peer_address": {
00:51:17.556 "trtype": "TCP",
00:51:17.556 "adrfam": "IPv4",
00:51:17.556 "traddr": "10.0.0.1",
00:51:17.556 "trsvcid": "33042"
00:51:17.556 },
00:51:17.556 "auth": {
00:51:17.556 "state": "completed",
00:51:17.557 "digest": "sha384",
00:51:17.557 "dhgroup": "null"
00:51:17.557 }
00:51:17.557 }
00:51:17.557 ]'
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:17.557 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:17.816 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:17.816 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:18.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:51:18.756 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:19.017 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:19.017 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:19.277
00:51:19.277 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:19.277 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:19.277 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:19.537 {
00:51:19.537 "cntlid": 57,
00:51:19.537 "qid": 0,
00:51:19.537 "state": "enabled",
00:51:19.537 "thread": "nvmf_tgt_poll_group_000",
00:51:19.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:19.537 "listen_address": {
00:51:19.537 "trtype": "TCP",
00:51:19.537 "adrfam": "IPv4",
00:51:19.537 "traddr": "10.0.0.2",
00:51:19.537 "trsvcid": "4420"
00:51:19.537 },
00:51:19.537 "peer_address": {
00:51:19.537 "trtype": "TCP",
00:51:19.537 "adrfam": "IPv4",
00:51:19.537 "traddr": "10.0.0.1",
00:51:19.537 "trsvcid": "33056"
00:51:19.537 },
00:51:19.537 "auth": {
00:51:19.537 "state": "completed",
00:51:19.537 "digest": "sha384",
00:51:19.537 "dhgroup": "ffdhe2048"
00:51:19.537 }
00:51:19.537 }
00:51:19.537 ]'
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:19.537 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:19.797 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:51:19.797 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:19.797 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:19.797 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:19.797 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:20.057 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:20.057 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:20.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:51:20.998 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:20.998 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:21.568
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:21.568 {
00:51:21.568 "cntlid": 59,
00:51:21.568 "qid": 0,
00:51:21.568 "state": "enabled",
00:51:21.568 "thread": "nvmf_tgt_poll_group_000",
00:51:21.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:21.568 "listen_address": {
00:51:21.568 "trtype": "TCP",
00:51:21.568 "adrfam": "IPv4",
00:51:21.568 "traddr": "10.0.0.2",
00:51:21.568 "trsvcid": "4420"
00:51:21.568 },
00:51:21.568 "peer_address": {
00:51:21.568 "trtype": "TCP",
00:51:21.568 "adrfam": "IPv4",
00:51:21.568 "traddr": "10.0.0.1",
00:51:21.568 "trsvcid": "33074"
00:51:21.568 },
00:51:21.568 "auth": {
00:51:21.568 "state": "completed",
00:51:21.568 "digest": "sha384",
00:51:21.568 "dhgroup": "ffdhe2048"
00:51:21.568 }
00:51:21.568 }
00:51:21.568 ]'
00:51:21.568 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:21.828 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:22.089 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:22.089 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:23.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:23.029 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:23.029 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:23.600 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:23.600 11:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:23.600 { 00:51:23.600 "cntlid": 61, 00:51:23.600 "qid": 0, 00:51:23.600 "state": "enabled", 00:51:23.600 "thread": "nvmf_tgt_poll_group_000", 00:51:23.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:23.600 "listen_address": { 00:51:23.600 "trtype": "TCP", 00:51:23.600 "adrfam": "IPv4", 00:51:23.600 "traddr": "10.0.0.2", 00:51:23.600 "trsvcid": "4420" 00:51:23.600 }, 00:51:23.600 "peer_address": { 00:51:23.600 "trtype": "TCP", 00:51:23.600 "adrfam": "IPv4", 00:51:23.600 "traddr": "10.0.0.1", 00:51:23.600 "trsvcid": "33088" 00:51:23.600 }, 00:51:23.600 "auth": { 00:51:23.600 "state": "completed", 00:51:23.600 "digest": "sha384", 00:51:23.600 "dhgroup": "ffdhe2048" 00:51:23.600 } 00:51:23.600 } 00:51:23.600 ]' 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:23.600 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:23.860 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:51:23.860 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:23.860 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:23.860 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:23.860 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:24.121 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:24.121 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:25.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:25.062 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:25.062 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:25.063 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:25.063 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:25.633 00:51:25.633 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:25.633 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:25.633 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:25.893 { 00:51:25.893 "cntlid": 63, 00:51:25.893 "qid": 0, 00:51:25.893 "state": "enabled", 00:51:25.893 "thread": "nvmf_tgt_poll_group_000", 00:51:25.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:25.893 "listen_address": { 00:51:25.893 "trtype": "TCP", 00:51:25.893 "adrfam": 
"IPv4", 00:51:25.893 "traddr": "10.0.0.2", 00:51:25.893 "trsvcid": "4420" 00:51:25.893 }, 00:51:25.893 "peer_address": { 00:51:25.893 "trtype": "TCP", 00:51:25.893 "adrfam": "IPv4", 00:51:25.893 "traddr": "10.0.0.1", 00:51:25.893 "trsvcid": "37982" 00:51:25.893 }, 00:51:25.893 "auth": { 00:51:25.893 "state": "completed", 00:51:25.893 "digest": "sha384", 00:51:25.893 "dhgroup": "ffdhe2048" 00:51:25.893 } 00:51:25.893 } 00:51:25.893 ]' 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:25.893 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:26.153 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:26.153 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:27.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:27.093 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:27.093 
11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:27.093 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:27.665 00:51:27.665 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:27.665 11:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:27.665 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:27.925 { 00:51:27.925 "cntlid": 65, 00:51:27.925 "qid": 0, 00:51:27.925 "state": "enabled", 00:51:27.925 "thread": "nvmf_tgt_poll_group_000", 00:51:27.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:27.925 "listen_address": { 00:51:27.925 "trtype": "TCP", 00:51:27.925 "adrfam": "IPv4", 00:51:27.925 "traddr": "10.0.0.2", 00:51:27.925 "trsvcid": "4420" 00:51:27.925 }, 00:51:27.925 "peer_address": { 00:51:27.925 "trtype": "TCP", 00:51:27.925 "adrfam": "IPv4", 00:51:27.925 "traddr": "10.0.0.1", 00:51:27.925 "trsvcid": "38016" 00:51:27.925 }, 00:51:27.925 "auth": { 00:51:27.925 "state": "completed", 00:51:27.925 "digest": "sha384", 00:51:27.925 "dhgroup": "ffdhe3072" 00:51:27.925 } 00:51:27.925 } 00:51:27.925 ]' 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:51:27.925 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:27.925 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:51:27.925 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:27.925 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:27.925 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:27.925 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:28.494 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:28.494 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:29.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:29.064 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:29.324 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:29.584 00:51:29.584 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:29.584 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:29.584 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:29.843 11:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:29.843 { 00:51:29.843 "cntlid": 67, 00:51:29.843 "qid": 0, 00:51:29.843 "state": "enabled", 00:51:29.843 "thread": "nvmf_tgt_poll_group_000", 00:51:29.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:29.843 "listen_address": { 00:51:29.843 "trtype": "TCP", 00:51:29.843 "adrfam": "IPv4", 00:51:29.843 "traddr": "10.0.0.2", 00:51:29.843 "trsvcid": "4420" 00:51:29.843 }, 00:51:29.843 "peer_address": { 00:51:29.843 "trtype": "TCP", 00:51:29.843 "adrfam": "IPv4", 00:51:29.843 "traddr": "10.0.0.1", 00:51:29.843 "trsvcid": "38038" 00:51:29.843 }, 00:51:29.843 "auth": { 00:51:29.843 "state": "completed", 00:51:29.843 "digest": "sha384", 00:51:29.843 "dhgroup": "ffdhe3072" 00:51:29.843 } 00:51:29.843 } 00:51:29.843 ]' 00:51:29.843 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:29.843 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:29.843 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:30.102 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:51:30.102 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:30.102 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:30.102 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:30.102 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:30.362 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:30.362 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:31.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:31.300 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:31.869 00:51:31.869 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:31.869 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:31.869 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.869 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:31.869 { 00:51:31.869 "cntlid": 69, 00:51:31.869 "qid": 0, 00:51:31.870 "state": "enabled", 00:51:31.870 "thread": "nvmf_tgt_poll_group_000", 00:51:31.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:31.870 
"listen_address": { 00:51:31.870 "trtype": "TCP", 00:51:31.870 "adrfam": "IPv4", 00:51:31.870 "traddr": "10.0.0.2", 00:51:31.870 "trsvcid": "4420" 00:51:31.870 }, 00:51:31.870 "peer_address": { 00:51:31.870 "trtype": "TCP", 00:51:31.870 "adrfam": "IPv4", 00:51:31.870 "traddr": "10.0.0.1", 00:51:31.870 "trsvcid": "38074" 00:51:31.870 }, 00:51:31.870 "auth": { 00:51:31.870 "state": "completed", 00:51:31.870 "digest": "sha384", 00:51:31.870 "dhgroup": "ffdhe3072" 00:51:31.870 } 00:51:31.870 } 00:51:31.870 ]' 00:51:31.870 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:32.130 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:32.389 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:32.389 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:32.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:32.958 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:33.218 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:33.787 00:51:33.787 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:33.787 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:51:33.787 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:34.048 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:34.049 { 00:51:34.049 "cntlid": 71, 00:51:34.049 "qid": 0, 00:51:34.049 "state": "enabled", 00:51:34.049 "thread": "nvmf_tgt_poll_group_000", 00:51:34.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:34.049 "listen_address": { 00:51:34.049 "trtype": "TCP", 00:51:34.049 "adrfam": "IPv4", 00:51:34.049 "traddr": "10.0.0.2", 00:51:34.049 "trsvcid": "4420" 00:51:34.049 }, 00:51:34.049 "peer_address": { 00:51:34.049 "trtype": "TCP", 00:51:34.049 "adrfam": "IPv4", 00:51:34.049 "traddr": "10.0.0.1", 00:51:34.049 "trsvcid": "38116" 00:51:34.049 }, 00:51:34.049 "auth": { 00:51:34.049 "state": "completed", 00:51:34.049 "digest": "sha384", 00:51:34.049 "dhgroup": "ffdhe3072" 00:51:34.049 } 00:51:34.049 } 00:51:34.049 ]' 00:51:34.049 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:34.049 11:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:34.049 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:34.310 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:34.310 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:35.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:35.248 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:35.508 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:35.767 00:51:35.767 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:35.767 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:35.767 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.028 11:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:36.028 { 00:51:36.028 "cntlid": 73, 00:51:36.028 "qid": 0, 00:51:36.028 "state": "enabled", 00:51:36.028 "thread": "nvmf_tgt_poll_group_000", 00:51:36.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:36.028 "listen_address": { 00:51:36.028 "trtype": "TCP", 00:51:36.028 "adrfam": "IPv4", 00:51:36.028 "traddr": "10.0.0.2", 00:51:36.028 "trsvcid": "4420" 00:51:36.028 }, 00:51:36.028 "peer_address": { 00:51:36.028 "trtype": "TCP", 00:51:36.028 "adrfam": "IPv4", 00:51:36.028 "traddr": "10.0.0.1", 00:51:36.028 "trsvcid": "44552" 00:51:36.028 }, 00:51:36.028 "auth": { 00:51:36.028 "state": "completed", 00:51:36.028 "digest": "sha384", 00:51:36.028 "dhgroup": "ffdhe4096" 00:51:36.028 } 00:51:36.028 } 00:51:36.028 ]' 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:51:36.028 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:36.288 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:36.288 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:36.288 11:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:36.547 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:36.547 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:37.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:37.116 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:37.376 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:37.945 00:51:37.945 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:37.945 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:37.945 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:37.945 { 00:51:37.945 "cntlid": 75, 00:51:37.945 "qid": 0, 00:51:37.945 "state": "enabled", 00:51:37.945 "thread": "nvmf_tgt_poll_group_000", 00:51:37.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:37.945 
"listen_address": { 00:51:37.945 "trtype": "TCP", 00:51:37.945 "adrfam": "IPv4", 00:51:37.945 "traddr": "10.0.0.2", 00:51:37.945 "trsvcid": "4420" 00:51:37.945 }, 00:51:37.945 "peer_address": { 00:51:37.945 "trtype": "TCP", 00:51:37.945 "adrfam": "IPv4", 00:51:37.945 "traddr": "10.0.0.1", 00:51:37.945 "trsvcid": "44564" 00:51:37.945 }, 00:51:37.945 "auth": { 00:51:37.945 "state": "completed", 00:51:37.945 "digest": "sha384", 00:51:37.945 "dhgroup": "ffdhe4096" 00:51:37.945 } 00:51:37.945 } 00:51:37.945 ]' 00:51:37.945 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:38.205 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:38.464 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:38.464 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:39.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:39.402 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:39.662 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:39.922 00:51:39.922 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:51:39.922 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:39.922 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:40.182 { 00:51:40.182 "cntlid": 77, 00:51:40.182 "qid": 0, 00:51:40.182 "state": "enabled", 00:51:40.182 "thread": "nvmf_tgt_poll_group_000", 00:51:40.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:40.182 "listen_address": { 00:51:40.182 "trtype": "TCP", 00:51:40.182 "adrfam": "IPv4", 00:51:40.182 "traddr": "10.0.0.2", 00:51:40.182 "trsvcid": "4420" 00:51:40.182 }, 00:51:40.182 "peer_address": { 00:51:40.182 "trtype": "TCP", 00:51:40.182 "adrfam": "IPv4", 00:51:40.182 "traddr": "10.0.0.1", 00:51:40.182 "trsvcid": "44586" 00:51:40.182 }, 00:51:40.182 "auth": { 00:51:40.182 "state": "completed", 00:51:40.182 "digest": "sha384", 00:51:40.182 "dhgroup": "ffdhe4096" 00:51:40.182 } 00:51:40.182 } 00:51:40.182 ]' 00:51:40.182 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:40.442 11:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:40.442 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:40.702 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:40.702 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:41.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:41.644 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:41.905 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:41.905 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:51:41.906 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:41.906 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:42.166
00:51:42.166 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:42.166 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:42.166 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:42.426 {
00:51:42.426 "cntlid": 79,
00:51:42.426 "qid": 0,
00:51:42.426 "state": "enabled",
00:51:42.426 "thread": "nvmf_tgt_poll_group_000",
00:51:42.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:42.426 "listen_address": {
00:51:42.426 "trtype": "TCP",
00:51:42.426 "adrfam": "IPv4",
00:51:42.426 "traddr": "10.0.0.2",
00:51:42.426 "trsvcid": "4420"
00:51:42.426 },
00:51:42.426 "peer_address": {
00:51:42.426 "trtype": "TCP",
00:51:42.426 "adrfam": "IPv4",
00:51:42.426 "traddr": "10.0.0.1",
00:51:42.426 "trsvcid": "44604"
00:51:42.426 },
00:51:42.426 "auth": {
00:51:42.426 "state": "completed",
00:51:42.426 "digest": "sha384",
00:51:42.426 "dhgroup": "ffdhe4096"
00:51:42.426 }
00:51:42.426 }
00:51:42.426 ]'
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:42.426 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:42.997 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:42.997 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:43.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:43.564 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:43.823 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:51:44.390
00:51:44.390 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:44.390 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:44.390 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:44.649 {
00:51:44.649 "cntlid": 81,
00:51:44.649 "qid": 0,
00:51:44.649 "state": "enabled",
00:51:44.649 "thread": "nvmf_tgt_poll_group_000",
00:51:44.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:44.649 "listen_address": {
00:51:44.649 "trtype": "TCP",
00:51:44.649 "adrfam": "IPv4",
00:51:44.649 "traddr": "10.0.0.2",
00:51:44.649 "trsvcid": "4420"
00:51:44.649 },
00:51:44.649 "peer_address": {
00:51:44.649 "trtype": "TCP",
00:51:44.649 "adrfam": "IPv4",
00:51:44.649 "traddr": "10.0.0.1",
00:51:44.649 "trsvcid": "44636"
00:51:44.649 },
00:51:44.649 "auth": {
00:51:44.649 "state": "completed",
00:51:44.649 "digest": "sha384",
00:51:44.649 "dhgroup": "ffdhe6144"
00:51:44.649 }
00:51:44.649 }
00:51:44.649 ]'
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:44.649 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:44.908 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:51:44.908 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:44.908 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:44.908 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:44.908 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:45.168 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:45.168 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=:
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:46.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:46.106 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:46.106 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:51:46.677
00:51:46.677 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:46.677 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:46.677 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:46.938 {
00:51:46.938 "cntlid": 83,
00:51:46.938 "qid": 0,
00:51:46.938 "state": "enabled",
00:51:46.938 "thread": "nvmf_tgt_poll_group_000",
00:51:46.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:46.938 "listen_address": {
00:51:46.938 "trtype": "TCP",
00:51:46.938 "adrfam": "IPv4",
00:51:46.938 "traddr": "10.0.0.2",
00:51:46.938 "trsvcid": "4420"
00:51:46.938 },
00:51:46.938 "peer_address": {
00:51:46.938 "trtype": "TCP",
00:51:46.938 "adrfam": "IPv4",
00:51:46.938 "traddr": "10.0.0.1",
00:51:46.938 "trsvcid": "41060"
00:51:46.938 },
00:51:46.938 "auth": {
00:51:46.938 "state": "completed",
00:51:46.938 "digest": "sha384",
00:51:46.938 "dhgroup": "ffdhe6144"
00:51:46.938 }
00:51:46.938 }
00:51:46.938 ]'
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:46.938 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:47.198 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:51:47.198 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:47.198 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:47.198 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:47.198 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:47.459 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:47.459 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==:
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:48.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:48.400 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:48.660 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:51:48.919
00:51:48.920 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:48.920 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:48.920 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:49.179 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:49.179 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:49.179 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:49.179 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:49.438 {
00:51:49.438 "cntlid": 85,
00:51:49.438 "qid": 0,
00:51:49.438 "state": "enabled",
00:51:49.438 "thread": "nvmf_tgt_poll_group_000",
00:51:49.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:49.438 "listen_address": {
00:51:49.438 "trtype": "TCP",
00:51:49.438 "adrfam": "IPv4",
00:51:49.438 "traddr": "10.0.0.2",
00:51:49.438 "trsvcid": "4420"
00:51:49.438 },
00:51:49.438 "peer_address": {
00:51:49.438 "trtype": "TCP",
00:51:49.438 "adrfam": "IPv4",
00:51:49.438 "traddr": "10.0.0.1",
00:51:49.438 "trsvcid": "41108"
00:51:49.438 },
00:51:49.438 "auth": {
00:51:49.438 "state": "completed",
00:51:49.438 "digest": "sha384",
00:51:49.438 "dhgroup": "ffdhe6144"
00:51:49.438 }
00:51:49.438 }
00:51:49.438 ]'
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:49.438 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:49.698 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:49.698 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1:
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:50.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:50.643 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:50.903 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:51:51.163
00:51:51.163 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:51:51.163 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:51:51.163 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:51.423 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:51:51.423 {
00:51:51.423 "cntlid": 87,
00:51:51.423 "qid": 0,
00:51:51.423 "state": "enabled",
00:51:51.423 "thread": "nvmf_tgt_poll_group_000",
00:51:51.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:51:51.423 "listen_address": {
00:51:51.423 "trtype": "TCP",
00:51:51.423 "adrfam": "IPv4",
00:51:51.423 "traddr": "10.0.0.2",
00:51:51.423 "trsvcid": "4420"
00:51:51.423 },
00:51:51.423 "peer_address": {
00:51:51.423 "trtype": "TCP",
00:51:51.423 "adrfam": "IPv4",
00:51:51.423 "traddr": "10.0.0.1",
00:51:51.423 "trsvcid": "41136"
00:51:51.423 },
00:51:51.423 "auth": {
00:51:51.423 "state": "completed",
00:51:51.423 "digest": "sha384",
00:51:51.423 "dhgroup": "ffdhe6144"
00:51:51.423 }
00:51:51.423 }
00:51:51.423 ]'
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:51:51.683 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:51:51.943 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:51.943 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=:
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:51:52.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:51:52.883 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:51:52.883 11:02:54
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:52.883 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:53.824 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:53.824 { 00:51:53.824 "cntlid": 89, 00:51:53.824 "qid": 0, 00:51:53.824 "state": "enabled", 00:51:53.824 "thread": "nvmf_tgt_poll_group_000", 00:51:53.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:53.824 "listen_address": { 00:51:53.824 "trtype": "TCP", 00:51:53.824 "adrfam": "IPv4", 00:51:53.824 "traddr": "10.0.0.2", 00:51:53.824 "trsvcid": "4420" 00:51:53.824 }, 00:51:53.824 "peer_address": { 00:51:53.824 "trtype": "TCP", 00:51:53.824 "adrfam": "IPv4", 00:51:53.824 "traddr": "10.0.0.1", 00:51:53.824 "trsvcid": "41152" 00:51:53.824 }, 00:51:53.824 "auth": { 00:51:53.824 "state": "completed", 00:51:53.824 "digest": "sha384", 00:51:53.824 "dhgroup": "ffdhe8192" 00:51:53.824 } 00:51:53.824 } 00:51:53.824 ]' 00:51:53.824 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:54.084 11:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:54.084 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:54.343 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:54.343 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:54.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:54.914 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:55.485 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:51:55.485 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:55.485 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:55.485 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:55.485 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:55.486 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:56.056 00:51:56.056 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:56.056 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:56.056 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:56.316 { 00:51:56.316 "cntlid": 91, 00:51:56.316 "qid": 0, 00:51:56.316 "state": "enabled", 00:51:56.316 "thread": "nvmf_tgt_poll_group_000", 00:51:56.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:56.316 "listen_address": { 00:51:56.316 "trtype": "TCP", 00:51:56.316 "adrfam": "IPv4", 00:51:56.316 "traddr": "10.0.0.2", 00:51:56.316 "trsvcid": "4420" 00:51:56.316 }, 00:51:56.316 "peer_address": { 00:51:56.316 "trtype": "TCP", 00:51:56.316 "adrfam": "IPv4", 00:51:56.316 "traddr": "10.0.0.1", 00:51:56.316 "trsvcid": "59854" 00:51:56.316 }, 00:51:56.316 "auth": { 00:51:56.316 "state": "completed", 00:51:56.316 "digest": "sha384", 00:51:56.316 "dhgroup": "ffdhe8192" 00:51:56.316 } 00:51:56.316 } 00:51:56.316 ]' 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:56.316 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:56.317 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:56.317 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:56.317 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:51:56.317 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:56.317 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:56.577 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:56.577 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:57.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:57.518 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:57.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:57.779 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:58.348 00:51:58.348 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:58.348 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:58.348 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:58.608 { 00:51:58.608 "cntlid": 93, 00:51:58.608 "qid": 0, 00:51:58.608 "state": "enabled", 00:51:58.608 "thread": "nvmf_tgt_poll_group_000", 00:51:58.608 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:51:58.608 "listen_address": { 00:51:58.608 "trtype": "TCP", 00:51:58.608 "adrfam": "IPv4", 00:51:58.608 "traddr": "10.0.0.2", 00:51:58.608 "trsvcid": "4420" 00:51:58.608 }, 00:51:58.608 "peer_address": { 00:51:58.608 "trtype": "TCP", 00:51:58.608 "adrfam": "IPv4", 00:51:58.608 "traddr": "10.0.0.1", 00:51:58.608 "trsvcid": "59884" 00:51:58.608 }, 00:51:58.608 "auth": { 00:51:58.608 "state": "completed", 00:51:58.608 "digest": "sha384", 00:51:58.608 "dhgroup": "ffdhe8192" 00:51:58.608 } 00:51:58.608 } 00:51:58.608 ]' 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:58.608 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:58.868 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:58.868 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:58.868 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:59.128 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:51:59.128 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:00.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:00.069 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:00.069 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:00.639 00:52:00.639 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:52:00.639 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:00.639 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:00.900 { 00:52:00.900 "cntlid": 95, 00:52:00.900 "qid": 0, 00:52:00.900 "state": "enabled", 00:52:00.900 "thread": "nvmf_tgt_poll_group_000", 00:52:00.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:00.900 "listen_address": { 00:52:00.900 "trtype": "TCP", 00:52:00.900 "adrfam": "IPv4", 00:52:00.900 "traddr": "10.0.0.2", 00:52:00.900 "trsvcid": "4420" 00:52:00.900 }, 00:52:00.900 "peer_address": { 00:52:00.900 "trtype": "TCP", 00:52:00.900 "adrfam": "IPv4", 00:52:00.900 "traddr": "10.0.0.1", 00:52:00.900 "trsvcid": "59914" 00:52:00.900 }, 00:52:00.900 "auth": { 00:52:00.900 "state": "completed", 00:52:00.900 "digest": "sha384", 00:52:00.900 "dhgroup": "ffdhe8192" 00:52:00.900 } 00:52:00.900 } 00:52:00.900 ]' 00:52:00.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:01.160 11:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:01.160 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:01.420 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:01.420 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:02.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:02.363 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:02.624 00:52:02.884 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:02.884 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:02.884 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:03.144 11:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:03.144 { 00:52:03.144 "cntlid": 97, 00:52:03.144 "qid": 0, 00:52:03.144 "state": "enabled", 00:52:03.144 "thread": "nvmf_tgt_poll_group_000", 00:52:03.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:03.144 "listen_address": { 00:52:03.144 "trtype": "TCP", 00:52:03.144 "adrfam": "IPv4", 00:52:03.144 "traddr": "10.0.0.2", 00:52:03.144 "trsvcid": "4420" 00:52:03.144 }, 00:52:03.144 "peer_address": { 00:52:03.144 "trtype": "TCP", 00:52:03.144 "adrfam": "IPv4", 00:52:03.144 "traddr": "10.0.0.1", 00:52:03.144 "trsvcid": "59946" 00:52:03.144 }, 00:52:03.144 "auth": { 00:52:03.144 "state": "completed", 00:52:03.144 "digest": "sha512", 00:52:03.144 "dhgroup": "null" 00:52:03.144 } 00:52:03.144 } 00:52:03.144 ]' 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:03.144 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:03.403 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:03.403 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:04.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:04.343 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:04.603 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:04.604 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:04.864 00:52:04.864 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:04.864 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:04.864 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:05.124 { 00:52:05.124 "cntlid": 99, 
00:52:05.124 "qid": 0, 00:52:05.124 "state": "enabled", 00:52:05.124 "thread": "nvmf_tgt_poll_group_000", 00:52:05.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:05.124 "listen_address": { 00:52:05.124 "trtype": "TCP", 00:52:05.124 "adrfam": "IPv4", 00:52:05.124 "traddr": "10.0.0.2", 00:52:05.124 "trsvcid": "4420" 00:52:05.124 }, 00:52:05.124 "peer_address": { 00:52:05.124 "trtype": "TCP", 00:52:05.124 "adrfam": "IPv4", 00:52:05.124 "traddr": "10.0.0.1", 00:52:05.124 "trsvcid": "39444" 00:52:05.124 }, 00:52:05.124 "auth": { 00:52:05.124 "state": "completed", 00:52:05.124 "digest": "sha512", 00:52:05.124 "dhgroup": "null" 00:52:05.124 } 00:52:05.124 } 00:52:05.124 ]' 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:05.124 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:05.385 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret 
DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:05.385 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:06.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:06.334 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:06.335 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:06.595 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:06.856 00:52:06.856 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:06.856 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:06.856 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:07.116 { 00:52:07.116 "cntlid": 101, 00:52:07.116 "qid": 0, 00:52:07.116 "state": "enabled", 00:52:07.116 "thread": "nvmf_tgt_poll_group_000", 00:52:07.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:07.116 "listen_address": { 00:52:07.116 "trtype": "TCP", 00:52:07.116 "adrfam": "IPv4", 00:52:07.116 "traddr": "10.0.0.2", 00:52:07.116 "trsvcid": "4420" 00:52:07.116 }, 00:52:07.116 "peer_address": { 00:52:07.116 "trtype": "TCP", 00:52:07.116 "adrfam": "IPv4", 00:52:07.116 "traddr": "10.0.0.1", 00:52:07.116 "trsvcid": "39462" 00:52:07.116 }, 00:52:07.116 "auth": { 00:52:07.116 "state": "completed", 00:52:07.116 "digest": "sha512", 00:52:07.116 "dhgroup": "null" 00:52:07.116 } 00:52:07.116 } 
00:52:07.116 ]' 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:07.116 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:07.376 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:52:07.377 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:07.377 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:07.377 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:07.377 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:07.637 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:07.637 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:08.206 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:08.206 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:08.206 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:08.206 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:08.206 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:08.467 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:09.037 00:52:09.037 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:09.037 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:09.037 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:09.037 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:09.037 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:52:09.038 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:09.038 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:09.297 { 00:52:09.297 "cntlid": 103, 00:52:09.297 "qid": 0, 00:52:09.297 "state": "enabled", 00:52:09.297 "thread": "nvmf_tgt_poll_group_000", 00:52:09.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:09.297 "listen_address": { 00:52:09.297 "trtype": "TCP", 00:52:09.297 "adrfam": "IPv4", 00:52:09.297 "traddr": "10.0.0.2", 00:52:09.297 "trsvcid": "4420" 00:52:09.297 }, 00:52:09.297 "peer_address": { 00:52:09.297 "trtype": "TCP", 00:52:09.297 "adrfam": "IPv4", 00:52:09.297 "traddr": "10.0.0.1", 00:52:09.297 "trsvcid": "39484" 00:52:09.297 }, 00:52:09.297 "auth": { 00:52:09.297 "state": "completed", 00:52:09.297 "digest": "sha512", 00:52:09.297 "dhgroup": "null" 00:52:09.297 } 00:52:09.297 } 00:52:09.297 ]' 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:09.297 11:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:09.297 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:09.557 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:09.557 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:10.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:10.496 11:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:10.496 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:10.756 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:11.016 00:52:11.016 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:11.016 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:11.016 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:11.277 { 00:52:11.277 "cntlid": 105, 00:52:11.277 "qid": 0, 00:52:11.277 "state": "enabled", 00:52:11.277 "thread": "nvmf_tgt_poll_group_000", 00:52:11.277 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:11.277 "listen_address": { 00:52:11.277 "trtype": "TCP", 00:52:11.277 "adrfam": "IPv4", 00:52:11.277 "traddr": "10.0.0.2", 00:52:11.277 "trsvcid": "4420" 00:52:11.277 }, 00:52:11.277 "peer_address": { 00:52:11.277 "trtype": "TCP", 00:52:11.277 "adrfam": "IPv4", 00:52:11.277 "traddr": "10.0.0.1", 00:52:11.277 "trsvcid": "39506" 00:52:11.277 }, 00:52:11.277 "auth": { 00:52:11.277 "state": "completed", 00:52:11.277 "digest": "sha512", 00:52:11.277 "dhgroup": "ffdhe2048" 00:52:11.277 } 00:52:11.277 } 00:52:11.277 ]' 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:11.277 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:11.847 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:11.847 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:12.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:12.418 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:12.677 11:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:52:12.677 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:12.677 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:12.677 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:52:12.677 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:52:12.677 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:12.678 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:12.937 00:52:12.938 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:12.938 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:12.938 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:13.197 { 00:52:13.197 "cntlid": 107, 00:52:13.197 "qid": 0, 00:52:13.197 "state": "enabled", 00:52:13.197 "thread": "nvmf_tgt_poll_group_000", 00:52:13.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:13.197 "listen_address": { 00:52:13.197 "trtype": "TCP", 00:52:13.197 "adrfam": "IPv4", 00:52:13.197 "traddr": "10.0.0.2", 00:52:13.197 "trsvcid": "4420" 00:52:13.197 }, 00:52:13.197 "peer_address": { 00:52:13.197 "trtype": "TCP", 00:52:13.197 "adrfam": "IPv4", 00:52:13.197 "traddr": "10.0.0.1", 00:52:13.197 "trsvcid": "39524" 00:52:13.197 }, 00:52:13.197 "auth": { 00:52:13.197 "state": 
"completed", 00:52:13.197 "digest": "sha512", 00:52:13.197 "dhgroup": "ffdhe2048" 00:52:13.197 } 00:52:13.197 } 00:52:13.197 ]' 00:52:13.197 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:13.457 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:13.717 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:13.717 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:14.657 11:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:14.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:14.657 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:14.658 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:14.918 00:52:14.918 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:14.918 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:14.918 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:15.178 
11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:15.179 { 00:52:15.179 "cntlid": 109, 00:52:15.179 "qid": 0, 00:52:15.179 "state": "enabled", 00:52:15.179 "thread": "nvmf_tgt_poll_group_000", 00:52:15.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:15.179 "listen_address": { 00:52:15.179 "trtype": "TCP", 00:52:15.179 "adrfam": "IPv4", 00:52:15.179 "traddr": "10.0.0.2", 00:52:15.179 "trsvcid": "4420" 00:52:15.179 }, 00:52:15.179 "peer_address": { 00:52:15.179 "trtype": "TCP", 00:52:15.179 "adrfam": "IPv4", 00:52:15.179 "traddr": "10.0.0.1", 00:52:15.179 "trsvcid": "40538" 00:52:15.179 }, 00:52:15.179 "auth": { 00:52:15.179 "state": "completed", 00:52:15.179 "digest": "sha512", 00:52:15.179 "dhgroup": "ffdhe2048" 00:52:15.179 } 00:52:15.179 } 00:52:15.179 ]' 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:52:15.179 11:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:15.179 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:15.439 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:15.699 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:15.699 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:16.270 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:16.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:16.270 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:16.270 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:16.270 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:16.530 
11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:16.530 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:16.530 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:16.530 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:16.791 11:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:16.791 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:17.051 00:52:17.051 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:17.051 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:17.051 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:17.311 { 00:52:17.311 "cntlid": 111, 
00:52:17.311 "qid": 0, 00:52:17.311 "state": "enabled", 00:52:17.311 "thread": "nvmf_tgt_poll_group_000", 00:52:17.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:17.311 "listen_address": { 00:52:17.311 "trtype": "TCP", 00:52:17.311 "adrfam": "IPv4", 00:52:17.311 "traddr": "10.0.0.2", 00:52:17.311 "trsvcid": "4420" 00:52:17.311 }, 00:52:17.311 "peer_address": { 00:52:17.311 "trtype": "TCP", 00:52:17.311 "adrfam": "IPv4", 00:52:17.311 "traddr": "10.0.0.1", 00:52:17.311 "trsvcid": "40564" 00:52:17.311 }, 00:52:17.311 "auth": { 00:52:17.311 "state": "completed", 00:52:17.311 "digest": "sha512", 00:52:17.311 "dhgroup": "ffdhe2048" 00:52:17.311 } 00:52:17.311 } 00:52:17.311 ]' 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:17.311 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:17.571 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:52:17.571 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:17.571 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:17.571 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:17.571 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:17.831 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:17.831 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:18.402 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:18.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:18.663 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:18.923 11:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:18.923 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:19.183 00:52:19.183 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:19.183 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:19.183 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:19.443 { 00:52:19.443 "cntlid": 113, 00:52:19.443 "qid": 0, 00:52:19.443 "state": "enabled", 00:52:19.443 "thread": "nvmf_tgt_poll_group_000", 00:52:19.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:19.443 "listen_address": { 00:52:19.443 "trtype": "TCP", 00:52:19.443 "adrfam": "IPv4", 00:52:19.443 "traddr": "10.0.0.2", 00:52:19.443 "trsvcid": "4420" 00:52:19.443 }, 00:52:19.443 "peer_address": { 00:52:19.443 "trtype": "TCP", 00:52:19.443 "adrfam": "IPv4", 00:52:19.443 "traddr": "10.0.0.1", 00:52:19.443 "trsvcid": "40590" 00:52:19.443 }, 00:52:19.443 "auth": { 00:52:19.443 "state": 
"completed", 00:52:19.443 "digest": "sha512", 00:52:19.443 "dhgroup": "ffdhe3072" 00:52:19.443 } 00:52:19.443 } 00:52:19.443 ]' 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:52:19.443 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:19.703 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:19.703 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:19.703 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:19.963 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:19.963 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:20.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:20.532 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:21.102 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:21.362 00:52:21.362 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:21.362 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:21.362 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:21.623 { 00:52:21.623 "cntlid": 115, 00:52:21.623 "qid": 0, 00:52:21.623 "state": "enabled", 00:52:21.623 "thread": "nvmf_tgt_poll_group_000", 00:52:21.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:21.623 "listen_address": { 00:52:21.623 "trtype": "TCP", 00:52:21.623 "adrfam": "IPv4", 00:52:21.623 "traddr": "10.0.0.2", 00:52:21.623 "trsvcid": "4420" 00:52:21.623 }, 00:52:21.623 "peer_address": { 00:52:21.623 "trtype": "TCP", 00:52:21.623 "adrfam": "IPv4", 00:52:21.623 "traddr": "10.0.0.1", 00:52:21.623 "trsvcid": "40612" 00:52:21.623 }, 00:52:21.623 "auth": { 00:52:21.623 "state": "completed", 00:52:21.623 "digest": "sha512", 00:52:21.623 "dhgroup": "ffdhe3072" 00:52:21.623 } 00:52:21.623 } 00:52:21.623 ]' 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:21.623 11:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:21.623 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:22.194 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:22.194 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:22.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:22.764 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:23.023 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:23.283 00:52:23.543 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:23.543 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:23.543 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:23.803 11:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:23.803 { 00:52:23.803 "cntlid": 117, 00:52:23.803 "qid": 0, 00:52:23.803 "state": "enabled", 00:52:23.803 "thread": "nvmf_tgt_poll_group_000", 00:52:23.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:23.803 "listen_address": { 00:52:23.803 "trtype": "TCP", 00:52:23.803 "adrfam": "IPv4", 00:52:23.803 "traddr": "10.0.0.2", 00:52:23.803 "trsvcid": "4420" 00:52:23.803 }, 00:52:23.803 "peer_address": { 00:52:23.803 "trtype": "TCP", 00:52:23.803 "adrfam": "IPv4", 00:52:23.803 "traddr": "10.0.0.1", 00:52:23.803 "trsvcid": "40646" 00:52:23.803 }, 00:52:23.803 "auth": { 00:52:23.803 "state": "completed", 00:52:23.803 "digest": "sha512", 00:52:23.803 "dhgroup": "ffdhe3072" 00:52:23.803 } 00:52:23.803 } 00:52:23.803 ]' 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:23.803 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:24.065 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:24.065 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:25.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:25.006 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:25.266 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:25.526 00:52:25.526 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:25.526 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:25.526 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:25.787 { 00:52:25.787 "cntlid": 119, 00:52:25.787 "qid": 0, 00:52:25.787 "state": "enabled", 00:52:25.787 "thread": "nvmf_tgt_poll_group_000", 00:52:25.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:25.787 "listen_address": { 00:52:25.787 "trtype": "TCP", 00:52:25.787 "adrfam": "IPv4", 00:52:25.787 "traddr": "10.0.0.2", 00:52:25.787 "trsvcid": "4420" 00:52:25.787 }, 00:52:25.787 "peer_address": { 00:52:25.787 "trtype": "TCP", 00:52:25.787 "adrfam": "IPv4", 00:52:25.787 "traddr": "10.0.0.1", 
00:52:25.787 "trsvcid": "57748" 00:52:25.787 }, 00:52:25.787 "auth": { 00:52:25.787 "state": "completed", 00:52:25.787 "digest": "sha512", 00:52:25.787 "dhgroup": "ffdhe3072" 00:52:25.787 } 00:52:25.787 } 00:52:25.787 ]' 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:25.787 11:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:26.047 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:26.047 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:26.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:26.987 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:27.247 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:52:27.247 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:27.247 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:27.247 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:52:27.247 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:27.248 11:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:27.248 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:27.508 00:52:27.508 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:27.508 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:27.508 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:27.767 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:27.767 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:27.767 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.767 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:28.026 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.026 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:28.026 { 00:52:28.026 "cntlid": 121, 00:52:28.026 "qid": 0, 00:52:28.026 "state": "enabled", 00:52:28.026 "thread": "nvmf_tgt_poll_group_000", 00:52:28.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:28.026 "listen_address": { 00:52:28.026 "trtype": "TCP", 00:52:28.026 "adrfam": "IPv4", 00:52:28.026 "traddr": "10.0.0.2", 00:52:28.026 "trsvcid": "4420" 00:52:28.026 }, 00:52:28.026 "peer_address": { 00:52:28.026 "trtype": "TCP", 00:52:28.026 "adrfam": "IPv4", 00:52:28.026 "traddr": "10.0.0.1", 00:52:28.026 "trsvcid": "57784" 00:52:28.026 }, 00:52:28.026 "auth": { 00:52:28.026 "state": "completed", 00:52:28.026 "digest": "sha512", 00:52:28.026 "dhgroup": "ffdhe4096" 00:52:28.026 } 00:52:28.026 } 00:52:28.026 ]' 00:52:28.026 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:28.026 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:28.026 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:28.026 11:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:52:28.026 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:28.026 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:28.026 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:28.026 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:28.286 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:28.286 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:29.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:29.225 11:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.225 11:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:29.225 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:29.795 00:52:29.795 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:29.795 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:29.795 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:30.055 { 00:52:30.055 "cntlid": 123, 00:52:30.055 "qid": 0, 00:52:30.055 "state": "enabled", 00:52:30.055 "thread": "nvmf_tgt_poll_group_000", 00:52:30.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:30.055 "listen_address": { 00:52:30.055 "trtype": "TCP", 00:52:30.055 "adrfam": "IPv4", 00:52:30.055 "traddr": "10.0.0.2", 00:52:30.055 "trsvcid": "4420" 00:52:30.055 }, 00:52:30.055 "peer_address": { 00:52:30.055 "trtype": "TCP", 00:52:30.055 "adrfam": "IPv4", 00:52:30.055 "traddr": "10.0.0.1", 00:52:30.055 "trsvcid": "57828" 00:52:30.055 }, 00:52:30.055 "auth": { 00:52:30.055 "state": "completed", 00:52:30.055 "digest": "sha512", 00:52:30.055 "dhgroup": "ffdhe4096" 00:52:30.055 } 00:52:30.055 } 00:52:30.055 ]' 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:30.055 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:30.317 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:30.317 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:31.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:31.258 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:31.258 11:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:31.518 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:31.519 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:31.779 00:52:31.779 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:31.779 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:31.779 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:32.039 { 00:52:32.039 "cntlid": 125, 00:52:32.039 "qid": 0, 00:52:32.039 "state": "enabled", 00:52:32.039 "thread": "nvmf_tgt_poll_group_000", 00:52:32.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:32.039 "listen_address": { 00:52:32.039 "trtype": "TCP", 00:52:32.039 "adrfam": "IPv4", 00:52:32.039 "traddr": "10.0.0.2", 00:52:32.039 
"trsvcid": "4420" 00:52:32.039 }, 00:52:32.039 "peer_address": { 00:52:32.039 "trtype": "TCP", 00:52:32.039 "adrfam": "IPv4", 00:52:32.039 "traddr": "10.0.0.1", 00:52:32.039 "trsvcid": "57860" 00:52:32.039 }, 00:52:32.039 "auth": { 00:52:32.039 "state": "completed", 00:52:32.039 "digest": "sha512", 00:52:32.039 "dhgroup": "ffdhe4096" 00:52:32.039 } 00:52:32.039 } 00:52:32.039 ]' 00:52:32.039 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:32.299 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:32.559 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:32.559 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:33.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:33.499 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.759 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:33.759 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:33.759 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:34.018 00:52:34.018 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:34.018 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:34.018 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:34.278 { 00:52:34.278 "cntlid": 127, 00:52:34.278 "qid": 0, 00:52:34.278 "state": "enabled", 00:52:34.278 "thread": "nvmf_tgt_poll_group_000", 00:52:34.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:34.278 "listen_address": { 00:52:34.278 "trtype": "TCP", 00:52:34.278 "adrfam": "IPv4", 00:52:34.278 "traddr": "10.0.0.2", 00:52:34.278 "trsvcid": "4420" 00:52:34.278 }, 00:52:34.278 "peer_address": { 00:52:34.278 "trtype": "TCP", 00:52:34.278 "adrfam": "IPv4", 00:52:34.278 "traddr": "10.0.0.1", 00:52:34.278 "trsvcid": "57886" 00:52:34.278 }, 00:52:34.278 "auth": { 00:52:34.278 "state": "completed", 00:52:34.278 "digest": "sha512", 00:52:34.278 "dhgroup": "ffdhe4096" 00:52:34.278 } 00:52:34.278 } 00:52:34.278 ]' 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:34.278 11:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:34.278 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:34.538 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:34.538 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:35.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:35.479 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:35.740 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:36.310 00:52:36.310 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:36.310 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:36.310 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:36.570 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:36.570 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:36.570 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.570 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:36.570 11:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.570 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:36.570 { 00:52:36.570 "cntlid": 129, 00:52:36.570 "qid": 0, 00:52:36.570 "state": "enabled", 00:52:36.570 "thread": "nvmf_tgt_poll_group_000", 00:52:36.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:36.571 "listen_address": { 00:52:36.571 "trtype": "TCP", 00:52:36.571 "adrfam": "IPv4", 00:52:36.571 "traddr": "10.0.0.2", 00:52:36.571 "trsvcid": "4420" 00:52:36.571 }, 00:52:36.571 "peer_address": { 00:52:36.571 "trtype": "TCP", 00:52:36.571 "adrfam": "IPv4", 00:52:36.571 "traddr": "10.0.0.1", 00:52:36.571 "trsvcid": "46348" 00:52:36.571 }, 00:52:36.571 "auth": { 00:52:36.571 "state": "completed", 00:52:36.571 "digest": "sha512", 00:52:36.571 "dhgroup": "ffdhe6144" 00:52:36.571 } 00:52:36.571 } 00:52:36.571 ]' 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:36.571 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:36.830 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:36.830 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:37.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:37.771 11:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:37.771 11:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:38.031 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:38.602 00:52:38.602 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:38.602 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:38.602 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:38.861 { 00:52:38.861 "cntlid": 131, 00:52:38.861 "qid": 0, 00:52:38.861 "state": "enabled", 00:52:38.861 "thread": "nvmf_tgt_poll_group_000", 00:52:38.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:38.861 "listen_address": { 00:52:38.861 "trtype": "TCP", 00:52:38.861 "adrfam": "IPv4", 00:52:38.861 "traddr": "10.0.0.2", 00:52:38.861 
"trsvcid": "4420" 00:52:38.861 }, 00:52:38.861 "peer_address": { 00:52:38.861 "trtype": "TCP", 00:52:38.861 "adrfam": "IPv4", 00:52:38.861 "traddr": "10.0.0.1", 00:52:38.861 "trsvcid": "46382" 00:52:38.861 }, 00:52:38.861 "auth": { 00:52:38.861 "state": "completed", 00:52:38.861 "digest": "sha512", 00:52:38.861 "dhgroup": "ffdhe6144" 00:52:38.861 } 00:52:38.861 } 00:52:38.861 ]' 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:38.861 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:39.121 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:39.121 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:40.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:40.060 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:40.320 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:40.890 00:52:40.890 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:40.890 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:52:40.890 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:41.149 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:41.149 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:41.149 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:41.149 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:41.150 { 00:52:41.150 "cntlid": 133, 00:52:41.150 "qid": 0, 00:52:41.150 "state": "enabled", 00:52:41.150 "thread": "nvmf_tgt_poll_group_000", 00:52:41.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:41.150 "listen_address": { 00:52:41.150 "trtype": "TCP", 00:52:41.150 "adrfam": "IPv4", 00:52:41.150 "traddr": "10.0.0.2", 00:52:41.150 "trsvcid": "4420" 00:52:41.150 }, 00:52:41.150 "peer_address": { 00:52:41.150 "trtype": "TCP", 00:52:41.150 "adrfam": "IPv4", 00:52:41.150 "traddr": "10.0.0.1", 00:52:41.150 "trsvcid": "46420" 00:52:41.150 }, 00:52:41.150 "auth": { 00:52:41.150 "state": "completed", 00:52:41.150 "digest": "sha512", 00:52:41.150 "dhgroup": "ffdhe6144" 00:52:41.150 } 00:52:41.150 } 00:52:41.150 ]' 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:41.150 11:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:41.150 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:41.409 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:41.409 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:42.349 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:42.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:42.350 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:42.610 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:43.180 00:52:43.180 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:43.180 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:43.180 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:43.441 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:43.441 { 00:52:43.441 "cntlid": 135, 00:52:43.441 "qid": 0, 00:52:43.441 "state": "enabled", 00:52:43.441 "thread": "nvmf_tgt_poll_group_000", 00:52:43.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:43.441 "listen_address": { 00:52:43.441 "trtype": "TCP", 00:52:43.441 "adrfam": "IPv4", 00:52:43.441 "traddr": "10.0.0.2", 00:52:43.441 "trsvcid": "4420" 00:52:43.441 }, 00:52:43.441 "peer_address": { 00:52:43.441 "trtype": "TCP", 00:52:43.441 "adrfam": "IPv4", 00:52:43.441 "traddr": "10.0.0.1", 00:52:43.441 "trsvcid": "46448" 00:52:43.441 }, 00:52:43.442 "auth": { 00:52:43.442 "state": "completed", 00:52:43.442 "digest": "sha512", 00:52:43.442 "dhgroup": "ffdhe6144" 00:52:43.442 } 00:52:43.442 } 00:52:43.442 ]' 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:43.442 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:43.702 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:43.702 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:44.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:44.642 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:44.642 11:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:44.902 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:52:44.902 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:44.902 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:44.902 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:44.902 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:44.903 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:45.615 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:45.615 { 00:52:45.615 "cntlid": 137, 00:52:45.615 "qid": 0, 00:52:45.615 "state": "enabled", 00:52:45.615 "thread": "nvmf_tgt_poll_group_000", 00:52:45.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:45.615 "listen_address": { 00:52:45.615 "trtype": "TCP", 00:52:45.615 "adrfam": "IPv4", 00:52:45.615 "traddr": "10.0.0.2", 00:52:45.615 
"trsvcid": "4420" 00:52:45.615 }, 00:52:45.615 "peer_address": { 00:52:45.615 "trtype": "TCP", 00:52:45.615 "adrfam": "IPv4", 00:52:45.615 "traddr": "10.0.0.1", 00:52:45.615 "trsvcid": "49282" 00:52:45.615 }, 00:52:45.615 "auth": { 00:52:45.615 "state": "completed", 00:52:45.615 "digest": "sha512", 00:52:45.615 "dhgroup": "ffdhe8192" 00:52:45.615 } 00:52:45.615 } 00:52:45.615 ]' 00:52:45.615 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:45.933 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:45.933 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:45.933 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:46.622 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:46.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:46.622 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:46.622 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.942 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:46.942 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.942 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:46.942 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:46.942 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:46.942 11:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:46.942 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:47.541 00:52:47.541 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:47.541 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:47.541 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:47.799 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:47.800 { 00:52:47.800 "cntlid": 139, 00:52:47.800 "qid": 0, 00:52:47.800 "state": "enabled", 00:52:47.800 "thread": "nvmf_tgt_poll_group_000", 00:52:47.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:47.800 "listen_address": { 00:52:47.800 "trtype": "TCP", 00:52:47.800 "adrfam": "IPv4", 00:52:47.800 "traddr": "10.0.0.2", 00:52:47.800 "trsvcid": "4420" 00:52:47.800 }, 00:52:47.800 "peer_address": { 00:52:47.800 "trtype": "TCP", 00:52:47.800 "adrfam": "IPv4", 00:52:47.800 "traddr": "10.0.0.1", 00:52:47.800 "trsvcid": "49306" 00:52:47.800 }, 00:52:47.800 "auth": { 00:52:47.800 "state": "completed", 00:52:47.800 "digest": "sha512", 00:52:47.800 "dhgroup": "ffdhe8192" 00:52:47.800 } 00:52:47.800 } 00:52:47.800 ]' 00:52:47.800 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:48.060 11:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:48.060 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:48.319 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:48.319 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: --dhchap-ctrl-secret DHHC-1:02:NTNjZmRiM2Y2ODQyMzQ2M2E1MDRlZDAxYjA2ODI5NTEzMjY1MDU0OGE5ODc5MzEz+hp9nQ==: 00:52:49.257 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:49.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:49.257 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:49.257 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.257 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:49.258 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.258 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:49.258 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:49.258 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:49.517 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:50.086 00:52:50.086 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:50.086 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:50.086 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:50.346 11:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:50.346 { 00:52:50.346 "cntlid": 141, 00:52:50.346 "qid": 0, 00:52:50.346 "state": "enabled", 00:52:50.346 "thread": "nvmf_tgt_poll_group_000", 00:52:50.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:50.346 "listen_address": { 00:52:50.346 "trtype": "TCP", 00:52:50.346 "adrfam": "IPv4", 00:52:50.346 "traddr": "10.0.0.2", 00:52:50.346 "trsvcid": "4420" 00:52:50.346 }, 00:52:50.346 "peer_address": { 00:52:50.346 "trtype": "TCP", 00:52:50.346 "adrfam": "IPv4", 00:52:50.346 "traddr": "10.0.0.1", 00:52:50.346 "trsvcid": "49332" 00:52:50.346 }, 00:52:50.346 "auth": { 00:52:50.346 "state": "completed", 00:52:50.346 "digest": "sha512", 00:52:50.346 "dhgroup": "ffdhe8192" 00:52:50.346 } 00:52:50.346 } 00:52:50.346 ]' 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:50.346 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:50.606 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:50.606 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:50.606 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:50.865 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:50.865 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ0NGFkMTg5NWNhZjlhODdkMGRiMDJlMzU1YjBkZDLSJzE1: 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:51.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:51.434 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:52.003 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:52.004 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:52.264 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:52.524 { 00:52:52.524 "cntlid": 143, 00:52:52.524 "qid": 0, 00:52:52.524 "state": "enabled", 00:52:52.524 "thread": "nvmf_tgt_poll_group_000", 00:52:52.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:52.524 "listen_address": { 00:52:52.524 "trtype": "TCP", 00:52:52.524 "adrfam": 
"IPv4", 00:52:52.524 "traddr": "10.0.0.2", 00:52:52.524 "trsvcid": "4420" 00:52:52.524 }, 00:52:52.524 "peer_address": { 00:52:52.524 "trtype": "TCP", 00:52:52.524 "adrfam": "IPv4", 00:52:52.524 "traddr": "10.0.0.1", 00:52:52.524 "trsvcid": "49368" 00:52:52.524 }, 00:52:52.524 "auth": { 00:52:52.524 "state": "completed", 00:52:52.524 "digest": "sha512", 00:52:52.524 "dhgroup": "ffdhe8192" 00:52:52.524 } 00:52:52.524 } 00:52:52.524 ]' 00:52:52.524 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:52.783 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:53.042 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:53.042 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:52:53.611 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:53.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:53.611 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:53.611 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:53.611 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:53.611 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:53.871 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:54.135 11:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:54.135 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:54.710 00:52:54.710 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:52:54.710 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:52:54.710 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:52:54.969 { 00:52:54.969 "cntlid": 145, 00:52:54.969 "qid": 0, 00:52:54.969 "state": "enabled", 00:52:54.969 "thread": "nvmf_tgt_poll_group_000", 00:52:54.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:54.969 "listen_address": { 00:52:54.969 "trtype": "TCP", 00:52:54.969 "adrfam": "IPv4", 00:52:54.969 "traddr": "10.0.0.2", 00:52:54.969 "trsvcid": "4420" 00:52:54.969 }, 00:52:54.969 "peer_address": { 00:52:54.969 "trtype": "TCP", 00:52:54.969 "adrfam": "IPv4", 00:52:54.969 "traddr": "10.0.0.1", 00:52:54.969 "trsvcid": "49396" 00:52:54.969 }, 00:52:54.969 "auth": { 00:52:54.969 "state": 
"completed", 00:52:54.969 "digest": "sha512", 00:52:54.969 "dhgroup": "ffdhe8192" 00:52:54.969 } 00:52:54.969 } 00:52:54.969 ]' 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:52:54.969 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:52:54.970 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:52:54.970 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:52:54.970 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:52:55.539 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:55.539 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:YTFhNTQzMzM3ZjRiYTUyMWY3OWI4ODM4Mjc3YTk2NWU0Njc3MjgyZjBiOWVhOTMy6D7FSg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTZiMjE2ZWI5NjMxZjlkZGY0YmFmODFjOWVkYjI3ZGY2YmE4ZjlkZTdkOGI1YjIyZGQ2M2IyODM3ZDA0MmE4Ng1xn5E=: 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:52:56.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:52:56.108 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:52:56.677 request: 00:52:56.677 { 00:52:56.677 "name": "nvme0", 00:52:56.677 "trtype": "tcp", 00:52:56.677 "traddr": "10.0.0.2", 00:52:56.677 "adrfam": "ipv4", 00:52:56.677 "trsvcid": "4420", 00:52:56.677 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:52:56.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:56.677 "prchk_reftag": false, 00:52:56.677 "prchk_guard": false, 00:52:56.677 "hdgst": false, 00:52:56.677 "ddgst": false, 00:52:56.677 "dhchap_key": "key2", 00:52:56.677 "allow_unrecognized_csi": false, 00:52:56.677 "method": "bdev_nvme_attach_controller", 00:52:56.677 "req_id": 1 00:52:56.677 } 00:52:56.677 Got JSON-RPC error response 00:52:56.677 response: 00:52:56.677 { 00:52:56.677 "code": -5, 00:52:56.677 "message": 
"Input/output error" 00:52:56.677 } 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:52:56.677 11:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.677 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:57.247 request: 00:52:57.247 { 00:52:57.247 "name": "nvme0", 00:52:57.247 "trtype": "tcp", 00:52:57.247 "traddr": "10.0.0.2", 00:52:57.247 "adrfam": "ipv4", 00:52:57.247 "trsvcid": "4420", 00:52:57.247 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:52:57.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:57.247 "prchk_reftag": false, 00:52:57.247 "prchk_guard": false, 00:52:57.247 "hdgst": 
false, 00:52:57.247 "ddgst": false, 00:52:57.247 "dhchap_key": "key1", 00:52:57.247 "dhchap_ctrlr_key": "ckey2", 00:52:57.247 "allow_unrecognized_csi": false, 00:52:57.247 "method": "bdev_nvme_attach_controller", 00:52:57.247 "req_id": 1 00:52:57.247 } 00:52:57.247 Got JSON-RPC error response 00:52:57.247 response: 00:52:57.247 { 00:52:57.247 "code": -5, 00:52:57.247 "message": "Input/output error" 00:52:57.247 } 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:57.247 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:57.817 request: 00:52:57.817 { 00:52:57.817 "name": "nvme0", 00:52:57.817 "trtype": 
"tcp", 00:52:57.817 "traddr": "10.0.0.2", 00:52:57.817 "adrfam": "ipv4", 00:52:57.817 "trsvcid": "4420", 00:52:57.817 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:52:57.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:52:57.817 "prchk_reftag": false, 00:52:57.817 "prchk_guard": false, 00:52:57.817 "hdgst": false, 00:52:57.817 "ddgst": false, 00:52:57.817 "dhchap_key": "key1", 00:52:57.817 "dhchap_ctrlr_key": "ckey1", 00:52:57.817 "allow_unrecognized_csi": false, 00:52:57.817 "method": "bdev_nvme_attach_controller", 00:52:57.817 "req_id": 1 00:52:57.817 } 00:52:57.817 Got JSON-RPC error response 00:52:57.817 response: 00:52:57.817 { 00:52:57.817 "code": -5, 00:52:57.817 "message": "Input/output error" 00:52:57.817 } 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2391362 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2391362 ']' 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2391362 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391362 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391362' 00:52:57.817 killing process with pid 2391362 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2391362 00:52:57.817 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2391362 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2414646 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2414646 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2414646 ']' 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:58.076 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2414646 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2414646 ']' 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:59.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.458 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 null0 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rDU 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PKI ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PKI 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xck 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.6kc ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6kc 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8fz 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.0xq ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0xq 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DNF 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:52:59.718 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:52:59.719 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:53:00.658 nvme0n1 00:53:00.658 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:53:00.658 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:00.658 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:53:00.917 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:00.917 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:53:00.917 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:00.918 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:00.918 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:00.918 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:53:00.918 { 00:53:00.918 "cntlid": 1, 00:53:00.918 "qid": 0, 00:53:00.918 "state": "enabled", 00:53:00.918 "thread": "nvmf_tgt_poll_group_000", 00:53:00.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:53:00.918 "listen_address": { 00:53:00.918 "trtype": "TCP", 00:53:00.918 "adrfam": "IPv4", 00:53:00.918 "traddr": "10.0.0.2", 00:53:00.918 "trsvcid": "4420" 00:53:00.918 }, 00:53:00.918 "peer_address": { 00:53:00.918 "trtype": "TCP", 00:53:00.918 "adrfam": "IPv4", 00:53:00.918 "traddr": 
"10.0.0.1", 00:53:00.918 "trsvcid": "46998" 00:53:00.918 }, 00:53:00.918 "auth": { 00:53:00.918 "state": "completed", 00:53:00.918 "digest": "sha512", 00:53:00.918 "dhgroup": "ffdhe8192" 00:53:00.918 } 00:53:00.918 } 00:53:00.918 ]' 00:53:00.918 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:53:00.918 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:53:00.918 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:53:00.918 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:53:00.918 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:53:01.177 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:53:01.177 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:53:01.177 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:53:01.177 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:53:01.177 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:53:02.122 11:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:53:02.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:53:02.122 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:53:02.382 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:53:02.382 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:02.382 11:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:53:02.382 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:53:02.382 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:02.383 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:53:02.383 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:02.383 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:53:02.383 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:53:02.383 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:53:02.642 request: 00:53:02.642 { 00:53:02.642 "name": "nvme0", 00:53:02.642 "trtype": "tcp", 00:53:02.642 "traddr": "10.0.0.2", 00:53:02.642 "adrfam": "ipv4", 00:53:02.642 "trsvcid": "4420", 00:53:02.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:53:02.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:53:02.642 "prchk_reftag": false, 00:53:02.642 "prchk_guard": false, 00:53:02.643 "hdgst": false, 00:53:02.643 "ddgst": false, 00:53:02.643 "dhchap_key": "key3", 00:53:02.643 
"allow_unrecognized_csi": false, 00:53:02.643 "method": "bdev_nvme_attach_controller", 00:53:02.643 "req_id": 1 00:53:02.643 } 00:53:02.643 Got JSON-RPC error response 00:53:02.643 response: 00:53:02.643 { 00:53:02.643 "code": -5, 00:53:02.643 "message": "Input/output error" 00:53:02.643 } 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:53:02.643 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:53:02.902 11:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:53:02.902 11:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:53:03.161 request: 00:53:03.161 { 00:53:03.161 "name": "nvme0", 00:53:03.161 "trtype": "tcp", 00:53:03.161 "traddr": "10.0.0.2", 00:53:03.161 "adrfam": "ipv4", 00:53:03.161 "trsvcid": "4420", 00:53:03.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:53:03.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:53:03.161 "prchk_reftag": false, 00:53:03.161 "prchk_guard": false, 00:53:03.161 "hdgst": false, 00:53:03.161 "ddgst": false, 00:53:03.161 "dhchap_key": "key3", 00:53:03.161 "allow_unrecognized_csi": false, 00:53:03.161 "method": "bdev_nvme_attach_controller", 00:53:03.161 "req_id": 1 00:53:03.161 } 00:53:03.161 Got JSON-RPC error response 00:53:03.161 response: 00:53:03.161 { 00:53:03.161 "code": -5, 00:53:03.161 "message": "Input/output error" 00:53:03.161 } 00:53:03.161 
11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:53:03.161 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:03.421 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:03.680 request: 00:53:03.680 { 00:53:03.680 "name": "nvme0", 00:53:03.680 "trtype": "tcp", 00:53:03.680 "traddr": "10.0.0.2", 00:53:03.680 "adrfam": "ipv4", 00:53:03.680 "trsvcid": "4420", 00:53:03.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:53:03.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:53:03.680 "prchk_reftag": false, 00:53:03.680 "prchk_guard": false, 00:53:03.680 "hdgst": false, 00:53:03.680 "ddgst": false, 00:53:03.680 "dhchap_key": "key0", 00:53:03.680 "dhchap_ctrlr_key": "key1", 00:53:03.680 "allow_unrecognized_csi": false, 00:53:03.680 "method": "bdev_nvme_attach_controller", 00:53:03.680 "req_id": 1 00:53:03.680 } 00:53:03.680 Got JSON-RPC error response 00:53:03.680 response: 00:53:03.680 { 00:53:03.680 "code": -5, 00:53:03.680 "message": "Input/output error" 00:53:03.680 } 00:53:03.939 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:53:03.940 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:53:04.198 nvme0n1 00:53:04.198 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:53:04.198 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:53:04.198 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:04.458 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:04.458 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:53:04.458 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:53:04.717 11:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:53:05.656 nvme0n1 00:53:05.656 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:53:05.656 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:53:05.656 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:05.916 
11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:53:05.916 11:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:06.176 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:06.176 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:53:06.176 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: --dhchap-ctrl-secret DHHC-1:03:ZDFiZGJmMTIxMDJlYmE1YWM4Mjc5ZWI5OTgwZjI2N2RjOTdlYTE0NWM4MTg1MzNlNGU0OWJiNzIxOTE4MjdhNhOHn/g=: 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:53:07.115 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:53:07.115 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:53:07.684 request: 00:53:07.684 { 00:53:07.684 "name": "nvme0", 00:53:07.684 "trtype": "tcp", 00:53:07.684 "traddr": "10.0.0.2", 00:53:07.684 "adrfam": "ipv4", 00:53:07.684 "trsvcid": "4420", 00:53:07.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:53:07.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:53:07.684 "prchk_reftag": false, 00:53:07.684 "prchk_guard": false, 00:53:07.684 "hdgst": false, 00:53:07.684 "ddgst": false, 00:53:07.684 "dhchap_key": "key1", 00:53:07.684 "allow_unrecognized_csi": false, 00:53:07.684 "method": "bdev_nvme_attach_controller", 00:53:07.684 "req_id": 1 00:53:07.684 } 00:53:07.684 Got JSON-RPC error response 00:53:07.684 response: 00:53:07.684 { 00:53:07.684 "code": -5, 00:53:07.684 "message": "Input/output error" 00:53:07.684 } 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:07.684 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:08.623 nvme0n1 00:53:08.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:53:08.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:53:08.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:08.881 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:08.881 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:53:08.881 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:53:09.446 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:53:09.703 nvme0n1 00:53:09.703 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:53:09.703 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:09.703 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:53:09.961 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:09.961 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:53:09.961 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key key3 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: '' 2s 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: ]] 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjVkNzhkNjExNDA3N2I3ZDVlODZmZjA0ZTgwZGUzZjZ5igll: 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:53:10.218 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:53:12.113 
11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key key2 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: 2s 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:53:12.113 11:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: ]] 00:53:12.113 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGVmZDhkNThlZTU3Y2MxMTBlOGIwYWM4Y2NhN2I5MmU5NDQ1NjA4NzBlMzA2ZDJjg8P11w==: 00:53:12.370 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:53:12.370 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:53:14.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:14.266 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:15.197 nvme0n1 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:15.197 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:15.762 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:53:15.762 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:15.762 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:53:16.020 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:53:16.277 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:53:16.278 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:53:16.278 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:53:16.535 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:53:17.100 request: 00:53:17.100 { 00:53:17.100 "name": "nvme0", 00:53:17.100 "dhchap_key": "key1", 00:53:17.100 "dhchap_ctrlr_key": "key3", 00:53:17.100 "method": "bdev_nvme_set_keys", 00:53:17.100 "req_id": 1 00:53:17.100 } 00:53:17.100 Got JSON-RPC error response 00:53:17.100 response: 00:53:17.100 { 00:53:17.100 "code": -13, 00:53:17.100 "message": "Permission denied" 00:53:17.100 } 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:53:17.100 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:53:17.100 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:17.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:53:17.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key key1 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:18.596 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:53:19.527 nvme0n1 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:19.527 11:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:53:19.527 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:53:20.091 request: 00:53:20.091 { 00:53:20.091 "name": "nvme0", 00:53:20.091 "dhchap_key": "key2", 00:53:20.091 "dhchap_ctrlr_key": "key0", 00:53:20.091 "method": "bdev_nvme_set_keys", 00:53:20.091 "req_id": 1 00:53:20.091 } 00:53:20.091 Got JSON-RPC error response 00:53:20.091 response: 00:53:20.091 { 00:53:20.091 "code": -13, 00:53:20.091 "message": "Permission denied" 00:53:20.091 } 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:20.091 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:53:20.348 11:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:53:20.348 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2391556 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2391556 ']' 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2391556 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391556 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391556' 00:53:21.719 killing process with pid 2391556 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2391556 00:53:21.719 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2391556 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:22.285 rmmod nvme_tcp 00:53:22.285 rmmod nvme_fabrics 00:53:22.285 rmmod nvme_keyring 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2414646 ']' 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2414646 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2414646 ']' 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2414646 
00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414646 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414646' 00:53:22.285 killing process with pid 2414646 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2414646 00:53:22.285 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2414646 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:22.543 11:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:22.543 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rDU /tmp/spdk.key-sha256.xck /tmp/spdk.key-sha384.8fz /tmp/spdk.key-sha512.DNF /tmp/spdk.key-sha512.PKI /tmp/spdk.key-sha384.6kc /tmp/spdk.key-sha256.0xq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:53:25.074 00:53:25.074 real 3m20.335s 00:53:25.074 user 7m41.239s 00:53:25.074 sys 0m36.455s 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:53:25.074 ************************************ 00:53:25.074 END TEST nvmf_auth_target 00:53:25.074 ************************************ 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # [[ tcp == \t\c\p ]] 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:25.074 ************************************ 00:53:25.074 START TEST nvmf_bdevio_no_huge 00:53:25.074 ************************************ 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:53:25.074 * Looking for test storage... 00:53:25.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 
-- # local 'op=<' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:53:25.074 11:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:25.074 --rc genhtml_branch_coverage=1 00:53:25.074 --rc genhtml_function_coverage=1 00:53:25.074 --rc genhtml_legend=1 00:53:25.074 --rc geninfo_all_blocks=1 00:53:25.074 --rc geninfo_unexecuted_blocks=1 00:53:25.074 00:53:25.074 ' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:25.074 --rc genhtml_branch_coverage=1 00:53:25.074 --rc genhtml_function_coverage=1 00:53:25.074 --rc genhtml_legend=1 00:53:25.074 --rc geninfo_all_blocks=1 00:53:25.074 --rc geninfo_unexecuted_blocks=1 00:53:25.074 00:53:25.074 ' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:25.074 --rc genhtml_branch_coverage=1 00:53:25.074 --rc genhtml_function_coverage=1 00:53:25.074 --rc genhtml_legend=1 00:53:25.074 --rc geninfo_all_blocks=1 00:53:25.074 --rc geninfo_unexecuted_blocks=1 00:53:25.074 00:53:25.074 ' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:25.074 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:53:25.074 --rc genhtml_branch_coverage=1 00:53:25.074 --rc genhtml_function_coverage=1 00:53:25.074 --rc genhtml_legend=1 00:53:25.074 --rc geninfo_all_blocks=1 00:53:25.074 --rc geninfo_unexecuted_blocks=1 00:53:25.074 00:53:25.074 ' 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:53:25.074 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:25.074 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # 
NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:25.075 11:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:25.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:53:25.075 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:53:31.635 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:53:31.636 Found 0000:af:00.0 (0x8086 - 0x159b) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:53:31.636 Found 0000:af:00.1 (0x8086 - 0x159b) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:53:31.636 Found net devices under 0000:af:00.0: cvl_0_0 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:31.636 
11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:53:31.636 Found net devices under 0000:af:00.1: cvl_0_1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:53:31.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:31.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:53:31.636 00:53:31.636 --- 10.0.0.2 ping statistics --- 00:53:31.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:31.636 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:53:31.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:31.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:53:31.636 00:53:31.636 --- 10.0.0.1 ping statistics --- 00:53:31.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:31.636 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:31.636 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2421036 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2421036 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2421036 ']' 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:31.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:31.637 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:31.637 [2024-12-09 11:04:32.602562] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:53:31.637 [2024-12-09 11:04:32.602658] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:53:31.895 [2024-12-09 11:04:32.834908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:31.895 [2024-12-09 11:04:32.891673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:31.895 [2024-12-09 11:04:32.891717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:31.895 [2024-12-09 11:04:32.891728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:31.895 [2024-12-09 11:04:32.891737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:31.895 [2024-12-09 11:04:32.891745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:53:31.895 [2024-12-09 11:04:32.892864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:53:31.895 [2024-12-09 11:04:32.892956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:53:31.895 [2024-12-09 11:04:32.893042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:31.895 [2024-12-09 11:04:32.893043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:53:32.459 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:32.459 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:53:32.459 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:32.459 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:32.459 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 [2024-12-09 11:04:33.663478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:53:32.717 11:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 Malloc0 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:32.717 [2024-12-09 11:04:33.711785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:32.717 11:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:53:32.717 { 00:53:32.717 "params": { 00:53:32.717 "name": "Nvme$subsystem", 00:53:32.717 "trtype": "$TEST_TRANSPORT", 00:53:32.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:53:32.717 "adrfam": "ipv4", 00:53:32.717 "trsvcid": "$NVMF_PORT", 00:53:32.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:53:32.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:53:32.717 "hdgst": ${hdgst:-false}, 00:53:32.717 "ddgst": ${ddgst:-false} 00:53:32.717 }, 00:53:32.717 "method": "bdev_nvme_attach_controller" 00:53:32.717 } 00:53:32.717 EOF 00:53:32.717 )") 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:53:32.717 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:53:32.717 "params": { 00:53:32.717 "name": "Nvme1", 00:53:32.717 "trtype": "tcp", 00:53:32.717 "traddr": "10.0.0.2", 00:53:32.717 "adrfam": "ipv4", 00:53:32.717 "trsvcid": "4420", 00:53:32.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:32.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:53:32.717 "hdgst": false, 00:53:32.717 "ddgst": false 00:53:32.717 }, 00:53:32.717 "method": "bdev_nvme_attach_controller" 00:53:32.717 }' 00:53:32.717 [2024-12-09 11:04:33.773839] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:53:32.717 [2024-12-09 11:04:33.773916] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2421237 ] 00:53:32.975 [2024-12-09 11:04:34.088341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:33.232 [2024-12-09 11:04:34.172511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:33.232 [2024-12-09 11:04:34.172597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:33.232 [2024-12-09 11:04:34.172603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:33.232 I/O targets: 00:53:33.232 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:53:33.232 00:53:33.232 00:53:33.232 CUnit - A unit testing framework for C - Version 2.1-3 00:53:33.232 http://cunit.sourceforge.net/ 00:53:33.232 00:53:33.232 00:53:33.232 Suite: bdevio tests on: Nvme1n1 00:53:33.490 Test: blockdev write read block ...passed 00:53:33.490 Test: blockdev write zeroes read block ...passed 00:53:33.490 Test: blockdev write zeroes read no split ...passed 00:53:33.490 Test: blockdev 
write zeroes read split ...passed 00:53:33.490 Test: blockdev write zeroes read split partial ...passed 00:53:33.490 Test: blockdev reset ...[2024-12-09 11:04:34.565058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:53:33.490 [2024-12-09 11:04:34.565144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140ae70 (9): Bad file descriptor 00:53:33.490 [2024-12-09 11:04:34.659755] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:53:33.490 passed 00:53:33.490 Test: blockdev write read 8 blocks ...passed 00:53:33.490 Test: blockdev write read size > 128k ...passed 00:53:33.490 Test: blockdev write read invalid size ...passed 00:53:33.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:33.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:33.748 Test: blockdev write read max offset ...passed 00:53:33.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:33.748 Test: blockdev writev readv 8 blocks ...passed 00:53:33.748 Test: blockdev writev readv 30 x 1block ...passed 00:53:33.748 Test: blockdev writev readv block ...passed 00:53:33.748 Test: blockdev writev readv size > 128k ...passed 00:53:33.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:33.748 Test: blockdev comparev and writev ...[2024-12-09 11:04:34.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.910523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.910541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 
[2024-12-09 11:04:34.910552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.910812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.910828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.910844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.910857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.911126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.911143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.911175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.911468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.911487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:33.748 [2024-12-09 11:04:34.911511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:53:33.748 [2024-12-09 11:04:34.911526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:53:34.005 passed 00:53:34.005 Test: blockdev nvme passthru rw ...passed 00:53:34.005 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:04:34.993948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:34.005 [2024-12-09 11:04:34.993975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:53:34.005 [2024-12-09 11:04:34.994098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:34.006 [2024-12-09 11:04:34.994112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:53:34.006 [2024-12-09 11:04:34.994227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:34.006 [2024-12-09 11:04:34.994241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:53:34.006 [2024-12-09 11:04:34.994356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:34.006 [2024-12-09 11:04:34.994370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:34.006 passed 00:53:34.006 Test: blockdev nvme admin passthru ...passed 00:53:34.006 Test: blockdev copy ...passed 00:53:34.006 00:53:34.006 Run Summary: Type Total Ran Passed Failed Inactive 00:53:34.006 suites 1 1 n/a 0 0 00:53:34.006 tests 23 23 23 0 0 00:53:34.006 asserts 152 152 152 0 n/a 00:53:34.006 00:53:34.006 Elapsed time = 1.289 
seconds 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:34.570 rmmod nvme_tcp 00:53:34.570 rmmod nvme_fabrics 00:53:34.570 rmmod nvme_keyring 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2421036 ']' 00:53:34.570 11:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2421036 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2421036 ']' 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2421036 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421036 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:53:34.570 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421036' 00:53:34.570 killing process with pid 2421036 00:53:34.827 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2421036 00:53:34.827 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2421036 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:53:35.084 11:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:35.084 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:53:37.613 00:53:37.613 real 0m12.472s 00:53:37.613 user 0m17.911s 00:53:37.613 sys 0m6.139s 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:53:37.613 ************************************ 00:53:37.613 END TEST nvmf_bdevio_no_huge 00:53:37.613 ************************************ 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # '[' tcp = tcp ']' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:37.613 11:04:38 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:37.613 ************************************ 00:53:37.613 START TEST nvmf_tls 00:53:37.613 ************************************ 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:53:37.613 * Looking for test storage... 00:53:37.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 
00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.613 --rc genhtml_branch_coverage=1 00:53:37.613 --rc genhtml_function_coverage=1 00:53:37.613 --rc genhtml_legend=1 00:53:37.613 --rc geninfo_all_blocks=1 00:53:37.613 --rc geninfo_unexecuted_blocks=1 00:53:37.613 00:53:37.613 ' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.613 --rc genhtml_branch_coverage=1 00:53:37.613 --rc genhtml_function_coverage=1 00:53:37.613 --rc genhtml_legend=1 00:53:37.613 --rc geninfo_all_blocks=1 00:53:37.613 --rc geninfo_unexecuted_blocks=1 00:53:37.613 00:53:37.613 ' 00:53:37.613 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.614 --rc genhtml_branch_coverage=1 00:53:37.614 --rc genhtml_function_coverage=1 00:53:37.614 --rc genhtml_legend=1 00:53:37.614 --rc geninfo_all_blocks=1 00:53:37.614 --rc geninfo_unexecuted_blocks=1 00:53:37.614 00:53:37.614 ' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.614 --rc genhtml_branch_coverage=1 00:53:37.614 --rc genhtml_function_coverage=1 00:53:37.614 --rc genhtml_legend=1 00:53:37.614 --rc geninfo_all_blocks=1 00:53:37.614 --rc geninfo_unexecuted_blocks=1 00:53:37.614 00:53:37.614 ' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:53:37.614 11:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:53:37.614 
11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:37.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable
00:53:37.614 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=()
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:53:44.174 Found 0000:af:00.0 (0x8086 - 0x159b)
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:53:44.174 Found 0000:af:00.1 (0x8086 - 0x159b)
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:53:44.174 Found net devices under 0000:af:00.0: cvl_0_0
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:53:44.174 Found net devices under 0000:af:00.1: cvl_0_1
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:53:44.174 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:53:44.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:53:44.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms
00:53:44.175
00:53:44.175 --- 10.0.0.2 ping statistics ---
00:53:44.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:44.175 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms
00:53:44.175 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:53:44.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:53:44.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:53:44.175
00:53:44.175 --- 10.0.0.1 ping statistics ---
00:53:44.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:44.175 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2424655
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2424655
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2424655 ']'
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:44.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:53:44.175 [2024-12-09 11:04:45.104925] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:53:44.175 [2024-12-09 11:04:45.105000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:53:44.175 [2024-12-09 11:04:45.209982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:44.175 [2024-12-09 11:04:45.256332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:53:44.175 [2024-12-09 11:04:45.256371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:53:44.175 [2024-12-09 11:04:45.256382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:53:44.175 [2024-12-09 11:04:45.256391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:53:44.175 [2024-12-09 11:04:45.256400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:53:44.175 [2024-12-09 11:04:45.256901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:53:44.175 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:53:44.432 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:53:44.432 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']'
00:53:44.432 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:53:44.689 true
00:53:44.689 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:44.689 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version
00:53:44.947 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0
00:53:44.947 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]]
00:53:44.947 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:53:45.205 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:45.205 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version
00:53:45.462 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13
00:53:45.462 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]]
00:53:45.462 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:53:45.720 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:45.720 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version
00:53:45.978 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7
00:53:45.978 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]]
00:53:45.978 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:45.978 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls
00:53:46.236 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false
00:53:46.236 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]]
00:53:46.236 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:53:46.493 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:46.493 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls
00:53:46.494 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true
00:53:46.494 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]]
00:53:46.494 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:53:47.058 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:53:47.058 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]]
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:53:47.058 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.jwCSCfxwAu
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.MetAXVHpZn
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.jwCSCfxwAu
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.MetAXVHpZn
00:53:47.316 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:53:47.574 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:53:47.832 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.jwCSCfxwAu
00:53:47.832 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jwCSCfxwAu
00:53:47.832 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:53:48.089 [2024-12-09 11:04:49.029597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:53:48.089 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:53:48.347 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:53:48.347 [2024-12-09 11:04:49.486759] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:53:48.347 [2024-12-09 11:04:49.486970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:53:48.347 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:53:48.604 malloc0
00:53:48.604 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:53:48.862 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jwCSCfxwAu
00:53:49.120 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:53:49.377 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.jwCSCfxwAu
00:54:01.570 Initializing NVMe Controllers
00:54:01.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:54:01.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:54:01.570 Initialization complete. Launching workers.
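(Editor's note on the key material used above.) The `format_interchange_psk`/`format_key` trace feeds the configured hex string through an inline `python -` snippet (nvmf/common.sh@733). The sketch below is a hedged reconstruction of what that snippet computes, not the verbatim SPDK helper: it assumes the TLS PSK interchange format is `<prefix>:<2-digit hash id>:base64(PSK bytes + CRC-32 of the PSK, little-endian):`, which matches the `NVMeTLSkey-1:01:...:` values printed in the log. The function name and the CRC byte order are assumptions.

```python
# Hedged reconstruction of the inline "python -" helper run by format_key.
# Assumption: interchange key = prefix : digest(2-hex) : base64(psk + crc32_le) :
import base64
import zlib


def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    raw = key.encode()                           # configured PSK, as ASCII bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte integrity check appended to the PSK
    b64 = base64.b64encode(raw + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, b64)


print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```

Under these assumptions the output has the same `NVMeTLSkey-1:01:...:` shape as the `key=` and `key_2=` lines in the trace, and decoding the base64 body recovers the 32 PSK bytes plus the 4 CRC bytes.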
00:54:01.570 ========================================================
00:54:01.570 Latency(us)
00:54:01.570 Device Information : IOPS MiB/s Average min max
00:54:01.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15226.89 59.48 4202.73 1565.77 44438.97
00:54:01.570 ========================================================
00:54:01.570 Total : 15226.89 59.48 4202.73 1565.77 44438.97
00:54:01.570
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jwCSCfxwAu
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jwCSCfxwAu
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2426660
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2426660 /var/tmp/bdevperf.sock
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2426660 ']'
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:54:01.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:54:01.570 [2024-12-09 11:05:00.659078] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:54:01.570 [2024-12-09 11:05:00.659136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426660 ]
00:54:01.570 [2024-12-09 11:05:00.738839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:01.570 [2024-12-09 11:05:00.782351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:54:01.570 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jwCSCfxwAu
00:54:01.570 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:54:01.570 [2024-12-09 11:05:01.452603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:54:01.570 TLSTESTn1
00:54:01.570 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:54:01.570 Running I/O for 10 seconds...
00:54:02.504 4877.00 IOPS, 19.05 MiB/s [2024-12-09T10:05:05.052Z] 4703.50 IOPS, 18.37 MiB/s [2024-12-09T10:05:05.983Z] 4736.67 IOPS, 18.50 MiB/s [2024-12-09T10:05:06.917Z] 4827.00 IOPS, 18.86 MiB/s [2024-12-09T10:05:07.850Z] 4884.00 IOPS, 19.08 MiB/s [2024-12-09T10:05:08.782Z] 4840.83 IOPS, 18.91 MiB/s [2024-12-09T10:05:09.715Z] 4857.86 IOPS, 18.98 MiB/s [2024-12-09T10:05:11.088Z] 4854.75 IOPS, 18.96 MiB/s [2024-12-09T10:05:12.022Z] 4875.89 IOPS, 19.05 MiB/s [2024-12-09T10:05:12.022Z] 4893.40 IOPS, 19.11 MiB/s
00:54:10.846 Latency(us)
00:54:10.846 [2024-12-09T10:05:12.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:10.846 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:54:10.846 Verification LBA range: start 0x0 length 0x2000
00:54:10.846 TLSTESTn1 : 10.01 4899.30 19.14 0.00 0.00 26090.03 5955.23 32824.99
00:54:10.846 [2024-12-09T10:05:12.022Z] ===================================================================================================================
00:54:10.846 [2024-12-09T10:05:12.022Z] Total : 4899.30 19.14 0.00 0.00 26090.03 5955.23 32824.99
00:54:10.846
00:54:10.846 {
00:54:10.846 "results": [
00:54:10.846 {
00:54:10.846 "job": "TLSTESTn1",
00:54:10.846 "core_mask": "0x4",
00:54:10.846 "workload": "verify",
00:54:10.846 "status": "finished",
00:54:10.846 "verify_range": {
00:54:10.846 "start": 0,
00:54:10.846 "length": 8192
00:54:10.846 },
00:54:10.846 "queue_depth": 128,
00:54:10.846 "io_size": 4096,
00:54:10.846 "runtime": 10.013874,
00:54:10.846 "iops": 4899.302707423721,
00:54:10.846 "mibps": 19.13790120087391,
00:54:10.846 "io_failed": 0,
00:54:10.846 "io_timeout": 0,
00:54:10.846 "avg_latency_us": 26090.028579363934,
00:54:10.846 "min_latency_us": 5955.227826086956,
00:54:10.846 "max_latency_us": 32824.98782608696
00:54:10.846 }
00:54:10.846 ],
00:54:10.846 "core_count": 1
00:54:10.846 }
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2426660
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2426660 ']'
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2426660
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426660
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426660'
00:54:10.846 killing process with pid 2426660
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2426660
00:54:10.846 Received shutdown signal, test time was about 10.000000 seconds
00:54:10.846
00:54:10.846 Latency(us)
00:54:10.846 [2024-12-09T10:05:12.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:10.846 [2024-12-09T10:05:12.022Z] =================================================================================================================== [2024-12-09T10:05:12.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2426660
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MetAXVHpZn
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MetAXVHpZn
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MetAXVHpZn
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MetAXVHpZn
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2428076
00:54:10.846 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2428076 /var/tmp/bdevperf.sock
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2428076 ']'
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:54:10.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:54:10.847 11:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:54:11.105 [2024-12-09 11:05:12.038623] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
00:54:11.105 [2024-12-09 11:05:12.038727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428076 ] 00:54:11.105 [2024-12-09 11:05:12.136877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:11.105 [2024-12-09 11:05:12.178934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:11.105 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:11.105 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:11.105 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MetAXVHpZn 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:11.670 [2024-12-09 11:05:12.800542] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:11.670 [2024-12-09 11:05:12.810702] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:54:11.670 [2024-12-09 11:05:12.811048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25075c0 (107): Transport endpoint is not connected 00:54:11.670 [2024-12-09 11:05:12.812040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25075c0 (9): Bad file descriptor 00:54:11.670 
[2024-12-09 11:05:12.813041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:54:11.670 [2024-12-09 11:05:12.813055] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:54:11.670 [2024-12-09 11:05:12.813065] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:54:11.670 [2024-12-09 11:05:12.813080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:54:11.670 request: 00:54:11.670 { 00:54:11.670 "name": "TLSTEST", 00:54:11.670 "trtype": "tcp", 00:54:11.670 "traddr": "10.0.0.2", 00:54:11.670 "adrfam": "ipv4", 00:54:11.670 "trsvcid": "4420", 00:54:11.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:11.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:11.670 "prchk_reftag": false, 00:54:11.670 "prchk_guard": false, 00:54:11.670 "hdgst": false, 00:54:11.670 "ddgst": false, 00:54:11.670 "psk": "key0", 00:54:11.670 "allow_unrecognized_csi": false, 00:54:11.670 "method": "bdev_nvme_attach_controller", 00:54:11.670 "req_id": 1 00:54:11.670 } 00:54:11.670 Got JSON-RPC error response 00:54:11.670 response: 00:54:11.670 { 00:54:11.670 "code": -5, 00:54:11.670 "message": "Input/output error" 00:54:11.670 } 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2428076 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2428076 ']' 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2428076 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:11.670 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428076 00:54:11.928 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:11.928 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:11.928 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428076' 00:54:11.928 killing process with pid 2428076 00:54:11.928 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2428076 00:54:11.928 Received shutdown signal, test time was about 10.000000 seconds 00:54:11.928 00:54:11.928 Latency(us) 00:54:11.928 [2024-12-09T10:05:13.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:11.928 [2024-12-09T10:05:13.104Z] =================================================================================================================== 00:54:11.928 [2024-12-09T10:05:13.104Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:11.928 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2428076 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jwCSCfxwAu 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jwCSCfxwAu 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jwCSCfxwAu 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jwCSCfxwAu 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2428260 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2428260 
/var/tmp/bdevperf.sock 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2428260 ']' 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:12.186 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:12.186 [2024-12-09 11:05:13.172115] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:12.186 [2024-12-09 11:05:13.172200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428260 ] 00:54:12.186 [2024-12-09 11:05:13.270629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:12.186 [2024-12-09 11:05:13.314890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:12.444 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:12.444 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:12.444 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jwCSCfxwAu 00:54:12.701 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:54:12.960 [2024-12-09 11:05:13.945589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:12.960 [2024-12-09 11:05:13.953408] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:54:12.960 [2024-12-09 11:05:13.953433] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:54:12.960 [2024-12-09 11:05:13.953461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:54:12.960 [2024-12-09 11:05:13.954000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa25c0 (107): Transport endpoint is not connected 00:54:12.960 [2024-12-09 11:05:13.954993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa25c0 (9): Bad file descriptor 00:54:12.960 [2024-12-09 11:05:13.955994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:54:12.960 [2024-12-09 11:05:13.956008] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:54:12.960 [2024-12-09 11:05:13.956018] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:54:12.960 [2024-12-09 11:05:13.956032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:54:12.960 request: 00:54:12.960 { 00:54:12.960 "name": "TLSTEST", 00:54:12.960 "trtype": "tcp", 00:54:12.960 "traddr": "10.0.0.2", 00:54:12.960 "adrfam": "ipv4", 00:54:12.960 "trsvcid": "4420", 00:54:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:12.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:54:12.960 "prchk_reftag": false, 00:54:12.960 "prchk_guard": false, 00:54:12.960 "hdgst": false, 00:54:12.960 "ddgst": false, 00:54:12.960 "psk": "key0", 00:54:12.960 "allow_unrecognized_csi": false, 00:54:12.960 "method": "bdev_nvme_attach_controller", 00:54:12.960 "req_id": 1 00:54:12.960 } 00:54:12.960 Got JSON-RPC error response 00:54:12.960 response: 00:54:12.960 { 00:54:12.960 "code": -5, 00:54:12.960 "message": "Input/output error" 00:54:12.960 } 00:54:12.960 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2428260 00:54:12.960 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2428260 ']' 00:54:12.960 11:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2428260 00:54:12.960 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:12.960 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:12.960 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428260 00:54:12.960 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:12.960 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:12.960 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428260' 00:54:12.960 killing process with pid 2428260 00:54:12.960 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2428260 00:54:12.960 Received shutdown signal, test time was about 10.000000 seconds 00:54:12.960 00:54:12.960 Latency(us) 00:54:12.960 [2024-12-09T10:05:14.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:12.960 [2024-12-09T10:05:14.136Z] =================================================================================================================== 00:54:12.960 [2024-12-09T10:05:14.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:12.960 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2428260 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:13.218 11:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jwCSCfxwAu 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jwCSCfxwAu 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jwCSCfxwAu 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jwCSCfxwAu 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2428439 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2428439 /var/tmp/bdevperf.sock 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2428439 ']' 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:13.218 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:13.219 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:13.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:13.219 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:13.219 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:13.219 [2024-12-09 11:05:14.301925] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:13.219 [2024-12-09 11:05:14.302008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428439 ] 00:54:13.476 [2024-12-09 11:05:14.399418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:13.476 [2024-12-09 11:05:14.442792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:13.476 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:13.476 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:13.476 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jwCSCfxwAu 00:54:13.734 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:13.734 [2024-12-09 11:05:14.904591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:13.992 [2024-12-09 11:05:14.915847] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:54:13.992 [2024-12-09 11:05:14.915873] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:54:13.992 [2024-12-09 11:05:14.915902] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:54:13.992 [2024-12-09 11:05:14.916291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23695c0 (107): Transport endpoint is not connected 00:54:13.992 [2024-12-09 11:05:14.917280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23695c0 (9): Bad file descriptor 00:54:13.992 [2024-12-09 11:05:14.918282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:54:13.992 [2024-12-09 11:05:14.918296] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:54:13.992 [2024-12-09 11:05:14.918307] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:54:13.992 [2024-12-09 11:05:14.918322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:54:13.992 request: 00:54:13.992 { 00:54:13.992 "name": "TLSTEST", 00:54:13.992 "trtype": "tcp", 00:54:13.992 "traddr": "10.0.0.2", 00:54:13.992 "adrfam": "ipv4", 00:54:13.992 "trsvcid": "4420", 00:54:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:54:13.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:13.992 "prchk_reftag": false, 00:54:13.992 "prchk_guard": false, 00:54:13.992 "hdgst": false, 00:54:13.992 "ddgst": false, 00:54:13.992 "psk": "key0", 00:54:13.993 "allow_unrecognized_csi": false, 00:54:13.993 "method": "bdev_nvme_attach_controller", 00:54:13.993 "req_id": 1 00:54:13.993 } 00:54:13.993 Got JSON-RPC error response 00:54:13.993 response: 00:54:13.993 { 00:54:13.993 "code": -5, 00:54:13.993 "message": "Input/output error" 00:54:13.993 } 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2428439 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2428439 ']' 00:54:13.993 11:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2428439 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428439 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428439' 00:54:13.993 killing process with pid 2428439 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2428439 00:54:13.993 Received shutdown signal, test time was about 10.000000 seconds 00:54:13.993 00:54:13.993 Latency(us) 00:54:13.993 [2024-12-09T10:05:15.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:13.993 [2024-12-09T10:05:15.169Z] =================================================================================================================== 00:54:13.993 [2024-12-09T10:05:15.169Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:13.993 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2428439 00:54:14.250 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:54:14.250 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:14.250 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:14.251 11:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2428621 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:14.251 11:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2428621 /var/tmp/bdevperf.sock 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2428621 ']' 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:14.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:14.251 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:14.251 [2024-12-09 11:05:15.258332] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:14.251 [2024-12-09 11:05:15.258414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428621 ] 00:54:14.251 [2024-12-09 11:05:15.354452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:14.251 [2024-12-09 11:05:15.395257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:14.509 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:14.509 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:14.509 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:54:14.509 [2024-12-09 11:05:15.668307] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:54:14.509 [2024-12-09 11:05:15.668342] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:54:14.509 request: 00:54:14.509 { 00:54:14.509 "name": "key0", 00:54:14.509 "path": "", 00:54:14.509 "method": "keyring_file_add_key", 00:54:14.509 "req_id": 1 00:54:14.509 } 00:54:14.509 Got JSON-RPC error response 00:54:14.509 response: 00:54:14.509 { 00:54:14.509 "code": -1, 00:54:14.509 "message": "Operation not permitted" 00:54:14.509 } 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:14.766 [2024-12-09 11:05:15.864916] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:54:14.766 [2024-12-09 11:05:15.864959] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:54:14.766 request: 00:54:14.766 { 00:54:14.766 "name": "TLSTEST", 00:54:14.766 "trtype": "tcp", 00:54:14.766 "traddr": "10.0.0.2", 00:54:14.766 "adrfam": "ipv4", 00:54:14.766 "trsvcid": "4420", 00:54:14.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:14.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:14.766 "prchk_reftag": false, 00:54:14.766 "prchk_guard": false, 00:54:14.766 "hdgst": false, 00:54:14.766 "ddgst": false, 00:54:14.766 "psk": "key0", 00:54:14.766 "allow_unrecognized_csi": false, 00:54:14.766 "method": "bdev_nvme_attach_controller", 00:54:14.766 "req_id": 1 00:54:14.766 } 00:54:14.766 Got JSON-RPC error response 00:54:14.766 response: 00:54:14.766 { 00:54:14.766 "code": -126, 00:54:14.766 "message": "Required key not available" 00:54:14.766 } 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2428621 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2428621 ']' 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2428621 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:14.766 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428621 00:54:15.025 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:15.025 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:15.025 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428621' 00:54:15.025 killing process with pid 2428621 
00:54:15.025 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2428621 00:54:15.025 Received shutdown signal, test time was about 10.000000 seconds 00:54:15.025 00:54:15.025 Latency(us) 00:54:15.025 [2024-12-09T10:05:16.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:15.025 [2024-12-09T10:05:16.201Z] =================================================================================================================== 00:54:15.025 [2024-12-09T10:05:16.201Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:15.025 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2428621 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2424655 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2424655 ']' 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2424655 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:15.025 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424655 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424655' 00:54:15.283 killing process with pid 2424655 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2424655 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2424655 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:54:15.283 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rociq3ElhS 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:54:15.541 11:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rociq3ElhS 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2428813 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2428813 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2428813 ']' 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:15.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:15.541 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:15.541 [2024-12-09 11:05:16.584044] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
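The `format_interchange_psk`/`format_key` step traced above wraps the configured hex key in the NVMe TLS PSK interchange format, `NVMeTLSkey-1:<hmac>:<base64 payload>:`, where the payload is the key text followed by its little-endian CRC-32. A sketch of that computation, assuming it matches what the inline `python -` heredoc in `nvmf/common.sh` does:

```python
import base64
import zlib

def format_interchange_psk(configured_key: str, hmac_id: int) -> str:
    """Build 'NVMeTLSkey-1:<hmac>:<base64(key || crc32_le(key))>:' as in the
    format_key helper traced above. Note it is the key *text* that gets
    encoded and checksummed, not its hex-decoded bytes."""
    raw = configured_key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    payload = base64.b64encode(raw + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, payload)

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

A receiver validates the blob by reversing this: base64-decode the middle field, split off the last four bytes, and check them against the CRC-32 of the rest.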
00:54:15.541 [2024-12-09 11:05:16.584132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:15.541 [2024-12-09 11:05:16.687996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:15.800 [2024-12-09 11:05:16.733738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:15.800 [2024-12-09 11:05:16.733778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:15.800 [2024-12-09 11:05:16.733789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:15.800 [2024-12-09 11:05:16.733799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:15.800 [2024-12-09 11:05:16.733806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:15.800 [2024-12-09 11:05:16.734290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rociq3ElhS 00:54:16.365 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:54:16.623 [2024-12-09 11:05:17.731119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:16.623 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:54:16.880 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:54:17.138 [2024-12-09 11:05:18.132124] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:17.138 [2024-12-09 11:05:18.132328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:54:17.138 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:54:17.395 malloc0 00:54:17.395 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:54:17.395 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:17.653 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rociq3ElhS 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rociq3ElhS 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2429080 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2429080 /var/tmp/bdevperf.sock 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2429080 ']' 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:17.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:17.911 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:17.911 [2024-12-09 11:05:19.068764] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:17.911 [2024-12-09 11:05:19.068844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429080 ] 00:54:18.169 [2024-12-09 11:05:19.165881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:18.169 [2024-12-09 11:05:19.212182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:18.169 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:18.169 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:18.169 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:18.427 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:18.685 [2024-12-09 11:05:19.662677] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:18.685 TLSTESTn1 00:54:18.685 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:54:18.942 Running I/O for 10 seconds... 
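bdevperf was started above with `-q 128 -o 4096 -w verify -t 10`, i.e. 128-deep verified 4 KiB I/O for ten seconds, so the IOPS and MiB/s columns in the summary that follows are related by the 4096-byte I/O size. A quick check of that relation, using the numbers this run reports in its JSON results block:

```python
# IOPS and MiB/s from bdevperf's summary are tied together by the I/O size.
iops = 4797.162864473134   # "iops" from this run's results block
io_size = 4096             # -o 4096 on the bdevperf command line
mib_s = iops * io_size / 2**20  # bytes/s converted to MiB/s
```

This reproduces the reported `"mibps": 18.738917...` to within rounding.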
00:54:20.807 4454.00 IOPS, 17.40 MiB/s [2024-12-09T10:05:22.917Z] 4800.00 IOPS, 18.75 MiB/s [2024-12-09T10:05:24.323Z] 4904.67 IOPS, 19.16 MiB/s [2024-12-09T10:05:24.889Z] 4940.50 IOPS, 19.30 MiB/s [2024-12-09T10:05:26.261Z] 4955.00 IOPS, 19.36 MiB/s [2024-12-09T10:05:27.195Z] 4737.83 IOPS, 18.51 MiB/s [2024-12-09T10:05:28.128Z] 4772.29 IOPS, 18.64 MiB/s [2024-12-09T10:05:29.062Z] 4796.50 IOPS, 18.74 MiB/s [2024-12-09T10:05:29.995Z] 4823.11 IOPS, 18.84 MiB/s [2024-12-09T10:05:29.995Z] 4791.20 IOPS, 18.72 MiB/s 00:54:28.819 Latency(us) 00:54:28.819 [2024-12-09T10:05:29.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:28.819 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:54:28.819 Verification LBA range: start 0x0 length 0x2000 00:54:28.819 TLSTESTn1 : 10.01 4797.16 18.74 0.00 0.00 26646.02 5727.28 32369.09 00:54:28.819 [2024-12-09T10:05:29.995Z] =================================================================================================================== 00:54:28.819 [2024-12-09T10:05:29.995Z] Total : 4797.16 18.74 0.00 0.00 26646.02 5727.28 32369.09 00:54:28.819 { 00:54:28.819 "results": [ 00:54:28.819 { 00:54:28.819 "job": "TLSTESTn1", 00:54:28.819 "core_mask": "0x4", 00:54:28.819 "workload": "verify", 00:54:28.819 "status": "finished", 00:54:28.819 "verify_range": { 00:54:28.819 "start": 0, 00:54:28.819 "length": 8192 00:54:28.819 }, 00:54:28.819 "queue_depth": 128, 00:54:28.819 "io_size": 4096, 00:54:28.819 "runtime": 10.014044, 00:54:28.819 "iops": 4797.162864473134, 00:54:28.819 "mibps": 18.738917439348178, 00:54:28.819 "io_failed": 0, 00:54:28.819 "io_timeout": 0, 00:54:28.819 "avg_latency_us": 26646.016143188008, 00:54:28.819 "min_latency_us": 5727.276521739131, 00:54:28.819 "max_latency_us": 32369.085217391304 00:54:28.819 } 00:54:28.819 ], 00:54:28.819 "core_count": 1 00:54:28.819 } 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2429080 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2429080 ']' 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2429080 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:28.819 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429080 00:54:29.078 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:29.078 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:29.078 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429080' 00:54:29.078 killing process with pid 2429080 00:54:29.078 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2429080 00:54:29.078 Received shutdown signal, test time was about 10.000000 seconds 00:54:29.078 00:54:29.078 Latency(us) 00:54:29.078 [2024-12-09T10:05:30.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:29.078 [2024-12-09T10:05:30.254Z] =================================================================================================================== 00:54:29.078 [2024-12-09T10:05:30.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:29.078 11:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2429080 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rociq3ElhS 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rociq3ElhS 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rociq3ElhS 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rociq3ElhS 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rociq3ElhS 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2430461 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2430461 /var/tmp/bdevperf.sock 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2430461 ']' 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:29.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:29.078 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:29.336 [2024-12-09 11:05:30.272503] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
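The `chmod 0666` in the preceding step is what makes the next `keyring_file_add_key` fail: the keyring refuses key files that are group- or world-accessible, and the `0100666` in its error message is the file's full `st_mode` in octal. A sketch of that permission check, assumed to match what `keyring_file_check_path` enforces based on the error text in this log:

```python
import os
import stat
import tempfile

def check_key_file_permissions(path: str) -> None:
    """Reject key files readable/writable by group or other, in the spirit of
    keyring_file_check_path (assumed behaviour, inferred from the log)."""
    st = os.stat(path)
    if stat.S_IMODE(st.st_mode) & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '%s': 0%o" % (path, st.st_mode))

# 0600 passes, 0666 is rejected:
with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o600)
    check_key_file_permissions(f.name)  # no exception
    os.chmod(f.name, 0o666)
    try:
        check_key_file_permissions(f.name)
        rejected = False
    except PermissionError:
        rejected = True
```

Formatting the whole `st_mode` (regular-file bit included) is what yields `0100666` rather than plain `0666`, matching the message in the log.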
00:54:29.337 [2024-12-09 11:05:30.272585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430461 ] 00:54:29.337 [2024-12-09 11:05:30.370841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:29.337 [2024-12-09 11:05:30.412674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:29.337 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:29.337 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:29.337 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:29.594 [2024-12-09 11:05:30.753420] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rociq3ElhS': 0100666 00:54:29.594 [2024-12-09 11:05:30.753459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:54:29.594 request: 00:54:29.594 { 00:54:29.594 "name": "key0", 00:54:29.594 "path": "/tmp/tmp.rociq3ElhS", 00:54:29.594 "method": "keyring_file_add_key", 00:54:29.594 "req_id": 1 00:54:29.594 } 00:54:29.594 Got JSON-RPC error response 00:54:29.594 response: 00:54:29.594 { 00:54:29.594 "code": -1, 00:54:29.594 "message": "Operation not permitted" 00:54:29.594 } 00:54:29.852 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:30.110 [2024-12-09 11:05:31.030229] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:30.110 [2024-12-09 11:05:31.030280] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:54:30.110 request: 00:54:30.110 { 00:54:30.110 "name": "TLSTEST", 00:54:30.110 "trtype": "tcp", 00:54:30.110 "traddr": "10.0.0.2", 00:54:30.110 "adrfam": "ipv4", 00:54:30.110 "trsvcid": "4420", 00:54:30.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:30.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:30.110 "prchk_reftag": false, 00:54:30.110 "prchk_guard": false, 00:54:30.110 "hdgst": false, 00:54:30.110 "ddgst": false, 00:54:30.110 "psk": "key0", 00:54:30.110 "allow_unrecognized_csi": false, 00:54:30.110 "method": "bdev_nvme_attach_controller", 00:54:30.110 "req_id": 1 00:54:30.110 } 00:54:30.110 Got JSON-RPC error response 00:54:30.110 response: 00:54:30.110 { 00:54:30.110 "code": -126, 00:54:30.110 "message": "Required key not available" 00:54:30.110 } 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2430461 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2430461 ']' 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2430461 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2430461 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2430461' 00:54:30.110 killing process with pid 2430461 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2430461 00:54:30.110 Received shutdown signal, test time was about 10.000000 seconds 00:54:30.110 00:54:30.110 Latency(us) 00:54:30.110 [2024-12-09T10:05:31.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:30.110 [2024-12-09T10:05:31.286Z] =================================================================================================================== 00:54:30.110 [2024-12-09T10:05:31.286Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:30.110 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2430461 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2428813 00:54:30.368 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2428813 ']' 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2428813 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428813 00:54:30.369 
11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428813' 00:54:30.369 killing process with pid 2428813 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2428813 00:54:30.369 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2428813 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2430662 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2430662 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2430662 ']' 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:54:30.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:30.627 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:30.627 [2024-12-09 11:05:31.696480] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:30.627 [2024-12-09 11:05:31.696562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:30.627 [2024-12-09 11:05:31.799827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:30.885 [2024-12-09 11:05:31.844854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:30.885 [2024-12-09 11:05:31.844899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:30.885 [2024-12-09 11:05:31.844910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:30.885 [2024-12-09 11:05:31.844920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:30.885 [2024-12-09 11:05:31.844928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:30.885 [2024-12-09 11:05:31.845454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:30.885 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:30.885 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:30.885 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:30.885 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:30.885 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rociq3ElhS 00:54:30.885 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:54:31.142 [2024-12-09 11:05:32.206637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:31.142 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:54:31.399 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:54:31.657 [2024-12-09 11:05:32.591607] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:31.657 [2024-12-09 11:05:32.591811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:31.657 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:54:31.657 malloc0 00:54:31.657 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:54:31.914 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:32.172 [2024-12-09 11:05:33.165364] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rociq3ElhS': 0100666 00:54:32.172 [2024-12-09 11:05:33.165403] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:54:32.172 request: 00:54:32.172 { 00:54:32.172 "name": "key0", 00:54:32.172 "path": "/tmp/tmp.rociq3ElhS", 00:54:32.172 "method": "keyring_file_add_key", 00:54:32.172 "req_id": 1 
00:54:32.172 } 00:54:32.172 Got JSON-RPC error response 00:54:32.172 response: 00:54:32.172 { 00:54:32.172 "code": -1, 00:54:32.172 "message": "Operation not permitted" 00:54:32.172 } 00:54:32.172 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:54:32.430 [2024-12-09 11:05:33.442098] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:54:32.430 [2024-12-09 11:05:33.442142] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:54:32.430 request: 00:54:32.430 { 00:54:32.430 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:32.430 "host": "nqn.2016-06.io.spdk:host1", 00:54:32.430 "psk": "key0", 00:54:32.430 "method": "nvmf_subsystem_add_host", 00:54:32.430 "req_id": 1 00:54:32.430 } 00:54:32.430 Got JSON-RPC error response 00:54:32.430 response: 00:54:32.430 { 00:54:32.430 "code": -32603, 00:54:32.430 "message": "Internal error" 00:54:32.430 } 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2430662 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2430662 ']' 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2430662 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:32.430 11:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2430662 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2430662' 00:54:32.430 killing process with pid 2430662 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2430662 00:54:32.430 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2430662 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rociq3ElhS 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2431025 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2431025 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2431025 ']' 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:32.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:32.688 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:32.688 [2024-12-09 11:05:33.830370] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:32.688 [2024-12-09 11:05:33.830451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:32.946 [2024-12-09 11:05:33.932489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:32.946 [2024-12-09 11:05:33.975031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:32.946 [2024-12-09 11:05:33.975073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:32.946 [2024-12-09 11:05:33.975084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:32.946 [2024-12-09 11:05:33.975094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:32.946 [2024-12-09 11:05:33.975102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:32.946 [2024-12-09 11:05:33.975583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rociq3ElhS 00:54:32.946 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:54:33.204 [2024-12-09 11:05:34.377242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:33.461 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:54:33.719 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:54:33.976 [2024-12-09 11:05:34.930622] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:33.977 [2024-12-09 11:05:34.930833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:54:33.977 11:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:54:33.977 malloc0 00:54:33.977 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:54:34.233 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:34.491 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2431238 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2431238 /var/tmp/bdevperf.sock 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2431238 ']' 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:54:34.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:34.749 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:34.749 [2024-12-09 11:05:35.754797] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:34.749 [2024-12-09 11:05:35.754862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431238 ] 00:54:34.749 [2024-12-09 11:05:35.836908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:34.749 [2024-12-09 11:05:35.878452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:35.007 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:35.007 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:35.007 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:35.265 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:54:35.522 [2024-12-09 11:05:36.536571] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:35.522 TLSTESTn1 00:54:35.522 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:54:36.107 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:54:36.107 "subsystems": [ 00:54:36.107 { 00:54:36.107 "subsystem": "keyring", 00:54:36.107 "config": [ 00:54:36.107 { 00:54:36.107 "method": "keyring_file_add_key", 00:54:36.107 "params": { 00:54:36.107 "name": "key0", 00:54:36.107 "path": "/tmp/tmp.rociq3ElhS" 00:54:36.107 } 00:54:36.107 } 00:54:36.107 ] 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "subsystem": "iobuf", 00:54:36.107 "config": [ 00:54:36.107 { 00:54:36.107 "method": "iobuf_set_options", 00:54:36.107 "params": { 00:54:36.107 "small_pool_count": 8192, 00:54:36.107 "large_pool_count": 1024, 00:54:36.107 "small_bufsize": 8192, 00:54:36.107 "large_bufsize": 135168, 00:54:36.107 "enable_numa": false 00:54:36.107 } 00:54:36.107 } 00:54:36.107 ] 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "subsystem": "sock", 00:54:36.107 "config": [ 00:54:36.107 { 00:54:36.107 "method": "sock_set_default_impl", 00:54:36.107 "params": { 00:54:36.107 "impl_name": "posix" 00:54:36.107 } 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "method": "sock_impl_set_options", 00:54:36.107 "params": { 00:54:36.107 "impl_name": "ssl", 00:54:36.107 "recv_buf_size": 4096, 00:54:36.107 "send_buf_size": 4096, 00:54:36.107 "enable_recv_pipe": true, 00:54:36.107 "enable_quickack": false, 00:54:36.107 "enable_placement_id": 0, 00:54:36.107 "enable_zerocopy_send_server": true, 00:54:36.107 "enable_zerocopy_send_client": false, 00:54:36.107 "zerocopy_threshold": 0, 00:54:36.107 "tls_version": 0, 00:54:36.107 "enable_ktls": false 00:54:36.107 } 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "method": "sock_impl_set_options", 00:54:36.107 "params": { 00:54:36.107 "impl_name": "posix", 00:54:36.107 "recv_buf_size": 2097152, 00:54:36.107 "send_buf_size": 2097152, 00:54:36.107 "enable_recv_pipe": true, 00:54:36.107 "enable_quickack": false, 00:54:36.107 "enable_placement_id": 0, 
00:54:36.107 "enable_zerocopy_send_server": true, 00:54:36.107 "enable_zerocopy_send_client": false, 00:54:36.107 "zerocopy_threshold": 0, 00:54:36.107 "tls_version": 0, 00:54:36.107 "enable_ktls": false 00:54:36.107 } 00:54:36.107 } 00:54:36.107 ] 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "subsystem": "vmd", 00:54:36.107 "config": [] 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "subsystem": "accel", 00:54:36.107 "config": [ 00:54:36.107 { 00:54:36.107 "method": "accel_set_options", 00:54:36.107 "params": { 00:54:36.107 "small_cache_size": 128, 00:54:36.107 "large_cache_size": 16, 00:54:36.107 "task_count": 2048, 00:54:36.107 "sequence_count": 2048, 00:54:36.107 "buf_count": 2048 00:54:36.107 } 00:54:36.107 } 00:54:36.107 ] 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "subsystem": "bdev", 00:54:36.107 "config": [ 00:54:36.107 { 00:54:36.107 "method": "bdev_set_options", 00:54:36.107 "params": { 00:54:36.107 "bdev_io_pool_size": 65535, 00:54:36.107 "bdev_io_cache_size": 256, 00:54:36.107 "bdev_auto_examine": true, 00:54:36.107 "iobuf_small_cache_size": 128, 00:54:36.107 "iobuf_large_cache_size": 16 00:54:36.107 } 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "method": "bdev_raid_set_options", 00:54:36.107 "params": { 00:54:36.107 "process_window_size_kb": 1024, 00:54:36.107 "process_max_bandwidth_mb_sec": 0 00:54:36.107 } 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "method": "bdev_iscsi_set_options", 00:54:36.107 "params": { 00:54:36.107 "timeout_sec": 30 00:54:36.107 } 00:54:36.107 }, 00:54:36.107 { 00:54:36.107 "method": "bdev_nvme_set_options", 00:54:36.107 "params": { 00:54:36.107 "action_on_timeout": "none", 00:54:36.107 "timeout_us": 0, 00:54:36.107 "timeout_admin_us": 0, 00:54:36.107 "keep_alive_timeout_ms": 10000, 00:54:36.107 "arbitration_burst": 0, 00:54:36.107 "low_priority_weight": 0, 00:54:36.107 "medium_priority_weight": 0, 00:54:36.107 "high_priority_weight": 0, 00:54:36.107 "nvme_adminq_poll_period_us": 10000, 00:54:36.107 "nvme_ioq_poll_period_us": 0, 
00:54:36.107 "io_queue_requests": 0, 00:54:36.107 "delay_cmd_submit": true, 00:54:36.107 "transport_retry_count": 4, 00:54:36.107 "bdev_retry_count": 3, 00:54:36.107 "transport_ack_timeout": 0, 00:54:36.107 "ctrlr_loss_timeout_sec": 0, 00:54:36.107 "reconnect_delay_sec": 0, 00:54:36.107 "fast_io_fail_timeout_sec": 0, 00:54:36.107 "disable_auto_failback": false, 00:54:36.107 "generate_uuids": false, 00:54:36.107 "transport_tos": 0, 00:54:36.107 "nvme_error_stat": false, 00:54:36.107 "rdma_srq_size": 0, 00:54:36.108 "io_path_stat": false, 00:54:36.108 "allow_accel_sequence": false, 00:54:36.108 "rdma_max_cq_size": 0, 00:54:36.108 "rdma_cm_event_timeout_ms": 0, 00:54:36.108 "dhchap_digests": [ 00:54:36.108 "sha256", 00:54:36.108 "sha384", 00:54:36.108 "sha512" 00:54:36.108 ], 00:54:36.108 "dhchap_dhgroups": [ 00:54:36.108 "null", 00:54:36.108 "ffdhe2048", 00:54:36.108 "ffdhe3072", 00:54:36.108 "ffdhe4096", 00:54:36.108 "ffdhe6144", 00:54:36.108 "ffdhe8192" 00:54:36.108 ] 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "bdev_nvme_set_hotplug", 00:54:36.108 "params": { 00:54:36.108 "period_us": 100000, 00:54:36.108 "enable": false 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "bdev_malloc_create", 00:54:36.108 "params": { 00:54:36.108 "name": "malloc0", 00:54:36.108 "num_blocks": 8192, 00:54:36.108 "block_size": 4096, 00:54:36.108 "physical_block_size": 4096, 00:54:36.108 "uuid": "8e8969a9-e76f-4571-a99b-c78dcc48d333", 00:54:36.108 "optimal_io_boundary": 0, 00:54:36.108 "md_size": 0, 00:54:36.108 "dif_type": 0, 00:54:36.108 "dif_is_head_of_md": false, 00:54:36.108 "dif_pi_format": 0 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "bdev_wait_for_examine" 00:54:36.108 } 00:54:36.108 ] 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "subsystem": "nbd", 00:54:36.108 "config": [] 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "subsystem": "scheduler", 00:54:36.108 "config": [ 00:54:36.108 { 00:54:36.108 "method": 
"framework_set_scheduler", 00:54:36.108 "params": { 00:54:36.108 "name": "static" 00:54:36.108 } 00:54:36.108 } 00:54:36.108 ] 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "subsystem": "nvmf", 00:54:36.108 "config": [ 00:54:36.108 { 00:54:36.108 "method": "nvmf_set_config", 00:54:36.108 "params": { 00:54:36.108 "discovery_filter": "match_any", 00:54:36.108 "admin_cmd_passthru": { 00:54:36.108 "identify_ctrlr": false 00:54:36.108 }, 00:54:36.108 "dhchap_digests": [ 00:54:36.108 "sha256", 00:54:36.108 "sha384", 00:54:36.108 "sha512" 00:54:36.108 ], 00:54:36.108 "dhchap_dhgroups": [ 00:54:36.108 "null", 00:54:36.108 "ffdhe2048", 00:54:36.108 "ffdhe3072", 00:54:36.108 "ffdhe4096", 00:54:36.108 "ffdhe6144", 00:54:36.108 "ffdhe8192" 00:54:36.108 ] 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_set_max_subsystems", 00:54:36.108 "params": { 00:54:36.108 "max_subsystems": 1024 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_set_crdt", 00:54:36.108 "params": { 00:54:36.108 "crdt1": 0, 00:54:36.108 "crdt2": 0, 00:54:36.108 "crdt3": 0 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_create_transport", 00:54:36.108 "params": { 00:54:36.108 "trtype": "TCP", 00:54:36.108 "max_queue_depth": 128, 00:54:36.108 "max_io_qpairs_per_ctrlr": 127, 00:54:36.108 "in_capsule_data_size": 4096, 00:54:36.108 "max_io_size": 131072, 00:54:36.108 "io_unit_size": 131072, 00:54:36.108 "max_aq_depth": 128, 00:54:36.108 "num_shared_buffers": 511, 00:54:36.108 "buf_cache_size": 4294967295, 00:54:36.108 "dif_insert_or_strip": false, 00:54:36.108 "zcopy": false, 00:54:36.108 "c2h_success": false, 00:54:36.108 "sock_priority": 0, 00:54:36.108 "abort_timeout_sec": 1, 00:54:36.108 "ack_timeout": 0, 00:54:36.108 "data_wr_pool_size": 0 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_create_subsystem", 00:54:36.108 "params": { 00:54:36.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.108 
"allow_any_host": false, 00:54:36.108 "serial_number": "SPDK00000000000001", 00:54:36.108 "model_number": "SPDK bdev Controller", 00:54:36.108 "max_namespaces": 10, 00:54:36.108 "min_cntlid": 1, 00:54:36.108 "max_cntlid": 65519, 00:54:36.108 "ana_reporting": false 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_subsystem_add_host", 00:54:36.108 "params": { 00:54:36.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.108 "host": "nqn.2016-06.io.spdk:host1", 00:54:36.108 "psk": "key0" 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_subsystem_add_ns", 00:54:36.108 "params": { 00:54:36.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.108 "namespace": { 00:54:36.108 "nsid": 1, 00:54:36.108 "bdev_name": "malloc0", 00:54:36.108 "nguid": "8E8969A9E76F4571A99BC78DCC48D333", 00:54:36.108 "uuid": "8e8969a9-e76f-4571-a99b-c78dcc48d333", 00:54:36.108 "no_auto_visible": false 00:54:36.108 } 00:54:36.108 } 00:54:36.108 }, 00:54:36.108 { 00:54:36.108 "method": "nvmf_subsystem_add_listener", 00:54:36.108 "params": { 00:54:36.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.108 "listen_address": { 00:54:36.108 "trtype": "TCP", 00:54:36.108 "adrfam": "IPv4", 00:54:36.108 "traddr": "10.0.0.2", 00:54:36.108 "trsvcid": "4420" 00:54:36.108 }, 00:54:36.108 "secure_channel": true 00:54:36.108 } 00:54:36.108 } 00:54:36.108 ] 00:54:36.108 } 00:54:36.108 ] 00:54:36.108 }' 00:54:36.108 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:54:36.367 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:54:36.367 "subsystems": [ 00:54:36.367 { 00:54:36.367 "subsystem": "keyring", 00:54:36.367 "config": [ 00:54:36.367 { 00:54:36.367 "method": "keyring_file_add_key", 00:54:36.367 "params": { 00:54:36.367 "name": "key0", 00:54:36.367 "path": "/tmp/tmp.rociq3ElhS" 00:54:36.367 } 
00:54:36.367 } 00:54:36.367 ] 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "subsystem": "iobuf", 00:54:36.367 "config": [ 00:54:36.367 { 00:54:36.367 "method": "iobuf_set_options", 00:54:36.367 "params": { 00:54:36.367 "small_pool_count": 8192, 00:54:36.367 "large_pool_count": 1024, 00:54:36.367 "small_bufsize": 8192, 00:54:36.367 "large_bufsize": 135168, 00:54:36.367 "enable_numa": false 00:54:36.367 } 00:54:36.367 } 00:54:36.367 ] 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "subsystem": "sock", 00:54:36.367 "config": [ 00:54:36.367 { 00:54:36.367 "method": "sock_set_default_impl", 00:54:36.367 "params": { 00:54:36.367 "impl_name": "posix" 00:54:36.367 } 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "method": "sock_impl_set_options", 00:54:36.367 "params": { 00:54:36.367 "impl_name": "ssl", 00:54:36.367 "recv_buf_size": 4096, 00:54:36.367 "send_buf_size": 4096, 00:54:36.367 "enable_recv_pipe": true, 00:54:36.367 "enable_quickack": false, 00:54:36.367 "enable_placement_id": 0, 00:54:36.367 "enable_zerocopy_send_server": true, 00:54:36.367 "enable_zerocopy_send_client": false, 00:54:36.367 "zerocopy_threshold": 0, 00:54:36.367 "tls_version": 0, 00:54:36.367 "enable_ktls": false 00:54:36.367 } 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "method": "sock_impl_set_options", 00:54:36.367 "params": { 00:54:36.367 "impl_name": "posix", 00:54:36.367 "recv_buf_size": 2097152, 00:54:36.367 "send_buf_size": 2097152, 00:54:36.367 "enable_recv_pipe": true, 00:54:36.367 "enable_quickack": false, 00:54:36.367 "enable_placement_id": 0, 00:54:36.367 "enable_zerocopy_send_server": true, 00:54:36.367 "enable_zerocopy_send_client": false, 00:54:36.367 "zerocopy_threshold": 0, 00:54:36.367 "tls_version": 0, 00:54:36.367 "enable_ktls": false 00:54:36.367 } 00:54:36.367 } 00:54:36.367 ] 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "subsystem": "vmd", 00:54:36.367 "config": [] 00:54:36.367 }, 00:54:36.367 { 00:54:36.367 "subsystem": "accel", 00:54:36.367 "config": [ 00:54:36.367 { 00:54:36.367 
"method": "accel_set_options", 00:54:36.367 "params": { 00:54:36.367 "small_cache_size": 128, 00:54:36.367 "large_cache_size": 16, 00:54:36.367 "task_count": 2048, 00:54:36.367 "sequence_count": 2048, 00:54:36.367 "buf_count": 2048 00:54:36.367 } 00:54:36.367 } 00:54:36.367 ] 00:54:36.367 }, 00:54:36.367 { 00:54:36.368 "subsystem": "bdev", 00:54:36.368 "config": [ 00:54:36.368 { 00:54:36.368 "method": "bdev_set_options", 00:54:36.368 "params": { 00:54:36.368 "bdev_io_pool_size": 65535, 00:54:36.368 "bdev_io_cache_size": 256, 00:54:36.368 "bdev_auto_examine": true, 00:54:36.368 "iobuf_small_cache_size": 128, 00:54:36.368 "iobuf_large_cache_size": 16 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_raid_set_options", 00:54:36.368 "params": { 00:54:36.368 "process_window_size_kb": 1024, 00:54:36.368 "process_max_bandwidth_mb_sec": 0 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_iscsi_set_options", 00:54:36.368 "params": { 00:54:36.368 "timeout_sec": 30 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_nvme_set_options", 00:54:36.368 "params": { 00:54:36.368 "action_on_timeout": "none", 00:54:36.368 "timeout_us": 0, 00:54:36.368 "timeout_admin_us": 0, 00:54:36.368 "keep_alive_timeout_ms": 10000, 00:54:36.368 "arbitration_burst": 0, 00:54:36.368 "low_priority_weight": 0, 00:54:36.368 "medium_priority_weight": 0, 00:54:36.368 "high_priority_weight": 0, 00:54:36.368 "nvme_adminq_poll_period_us": 10000, 00:54:36.368 "nvme_ioq_poll_period_us": 0, 00:54:36.368 "io_queue_requests": 512, 00:54:36.368 "delay_cmd_submit": true, 00:54:36.368 "transport_retry_count": 4, 00:54:36.368 "bdev_retry_count": 3, 00:54:36.368 "transport_ack_timeout": 0, 00:54:36.368 "ctrlr_loss_timeout_sec": 0, 00:54:36.368 "reconnect_delay_sec": 0, 00:54:36.368 "fast_io_fail_timeout_sec": 0, 00:54:36.368 "disable_auto_failback": false, 00:54:36.368 "generate_uuids": false, 00:54:36.368 "transport_tos": 0, 00:54:36.368 
"nvme_error_stat": false, 00:54:36.368 "rdma_srq_size": 0, 00:54:36.368 "io_path_stat": false, 00:54:36.368 "allow_accel_sequence": false, 00:54:36.368 "rdma_max_cq_size": 0, 00:54:36.368 "rdma_cm_event_timeout_ms": 0, 00:54:36.368 "dhchap_digests": [ 00:54:36.368 "sha256", 00:54:36.368 "sha384", 00:54:36.368 "sha512" 00:54:36.368 ], 00:54:36.368 "dhchap_dhgroups": [ 00:54:36.368 "null", 00:54:36.368 "ffdhe2048", 00:54:36.368 "ffdhe3072", 00:54:36.368 "ffdhe4096", 00:54:36.368 "ffdhe6144", 00:54:36.368 "ffdhe8192" 00:54:36.368 ] 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_nvme_attach_controller", 00:54:36.368 "params": { 00:54:36.368 "name": "TLSTEST", 00:54:36.368 "trtype": "TCP", 00:54:36.368 "adrfam": "IPv4", 00:54:36.368 "traddr": "10.0.0.2", 00:54:36.368 "trsvcid": "4420", 00:54:36.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.368 "prchk_reftag": false, 00:54:36.368 "prchk_guard": false, 00:54:36.368 "ctrlr_loss_timeout_sec": 0, 00:54:36.368 "reconnect_delay_sec": 0, 00:54:36.368 "fast_io_fail_timeout_sec": 0, 00:54:36.368 "psk": "key0", 00:54:36.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:36.368 "hdgst": false, 00:54:36.368 "ddgst": false, 00:54:36.368 "multipath": "multipath" 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_nvme_set_hotplug", 00:54:36.368 "params": { 00:54:36.368 "period_us": 100000, 00:54:36.368 "enable": false 00:54:36.368 } 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "method": "bdev_wait_for_examine" 00:54:36.368 } 00:54:36.368 ] 00:54:36.368 }, 00:54:36.368 { 00:54:36.368 "subsystem": "nbd", 00:54:36.368 "config": [] 00:54:36.368 } 00:54:36.368 ] 00:54:36.368 }' 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2431238 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2431238 ']' 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2431238 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431238 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431238' 00:54:36.368 killing process with pid 2431238 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2431238 00:54:36.368 Received shutdown signal, test time was about 10.000000 seconds 00:54:36.368 00:54:36.368 Latency(us) 00:54:36.368 [2024-12-09T10:05:37.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:36.368 [2024-12-09T10:05:37.544Z] =================================================================================================================== 00:54:36.368 [2024-12-09T10:05:37.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:54:36.368 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2431238 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2431025 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2431025 ']' 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2431025 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431025 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431025' 00:54:36.626 killing process with pid 2431025 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2431025 00:54:36.626 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2431025 00:54:36.886 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:54:36.886 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:36.886 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:36.886 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:36.886 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:54:36.886 "subsystems": [ 00:54:36.886 { 00:54:36.886 "subsystem": "keyring", 00:54:36.886 "config": [ 00:54:36.886 { 00:54:36.886 "method": "keyring_file_add_key", 00:54:36.886 "params": { 00:54:36.886 "name": "key0", 00:54:36.886 "path": "/tmp/tmp.rociq3ElhS" 00:54:36.886 } 00:54:36.886 } 00:54:36.886 ] 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "subsystem": "iobuf", 00:54:36.886 "config": [ 00:54:36.886 { 00:54:36.886 "method": "iobuf_set_options", 00:54:36.886 "params": { 00:54:36.886 "small_pool_count": 8192, 00:54:36.886 "large_pool_count": 1024, 00:54:36.886 "small_bufsize": 8192, 00:54:36.886 "large_bufsize": 135168, 
00:54:36.886 "enable_numa": false 00:54:36.886 } 00:54:36.886 } 00:54:36.886 ] 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "subsystem": "sock", 00:54:36.886 "config": [ 00:54:36.886 { 00:54:36.886 "method": "sock_set_default_impl", 00:54:36.886 "params": { 00:54:36.886 "impl_name": "posix" 00:54:36.886 } 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "method": "sock_impl_set_options", 00:54:36.886 "params": { 00:54:36.886 "impl_name": "ssl", 00:54:36.886 "recv_buf_size": 4096, 00:54:36.886 "send_buf_size": 4096, 00:54:36.886 "enable_recv_pipe": true, 00:54:36.886 "enable_quickack": false, 00:54:36.886 "enable_placement_id": 0, 00:54:36.886 "enable_zerocopy_send_server": true, 00:54:36.886 "enable_zerocopy_send_client": false, 00:54:36.886 "zerocopy_threshold": 0, 00:54:36.886 "tls_version": 0, 00:54:36.886 "enable_ktls": false 00:54:36.886 } 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "method": "sock_impl_set_options", 00:54:36.886 "params": { 00:54:36.886 "impl_name": "posix", 00:54:36.886 "recv_buf_size": 2097152, 00:54:36.886 "send_buf_size": 2097152, 00:54:36.886 "enable_recv_pipe": true, 00:54:36.886 "enable_quickack": false, 00:54:36.886 "enable_placement_id": 0, 00:54:36.886 "enable_zerocopy_send_server": true, 00:54:36.886 "enable_zerocopy_send_client": false, 00:54:36.886 "zerocopy_threshold": 0, 00:54:36.886 "tls_version": 0, 00:54:36.886 "enable_ktls": false 00:54:36.886 } 00:54:36.886 } 00:54:36.886 ] 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "subsystem": "vmd", 00:54:36.886 "config": [] 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "subsystem": "accel", 00:54:36.886 "config": [ 00:54:36.886 { 00:54:36.886 "method": "accel_set_options", 00:54:36.886 "params": { 00:54:36.886 "small_cache_size": 128, 00:54:36.886 "large_cache_size": 16, 00:54:36.886 "task_count": 2048, 00:54:36.886 "sequence_count": 2048, 00:54:36.886 "buf_count": 2048 00:54:36.886 } 00:54:36.886 } 00:54:36.886 ] 00:54:36.886 }, 00:54:36.886 { 00:54:36.886 "subsystem": "bdev", 00:54:36.886 
"config": [ 00:54:36.886 { 00:54:36.886 "method": "bdev_set_options", 00:54:36.886 "params": { 00:54:36.886 "bdev_io_pool_size": 65535, 00:54:36.886 "bdev_io_cache_size": 256, 00:54:36.886 "bdev_auto_examine": true, 00:54:36.886 "iobuf_small_cache_size": 128, 00:54:36.887 "iobuf_large_cache_size": 16 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_raid_set_options", 00:54:36.887 "params": { 00:54:36.887 "process_window_size_kb": 1024, 00:54:36.887 "process_max_bandwidth_mb_sec": 0 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_iscsi_set_options", 00:54:36.887 "params": { 00:54:36.887 "timeout_sec": 30 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_nvme_set_options", 00:54:36.887 "params": { 00:54:36.887 "action_on_timeout": "none", 00:54:36.887 "timeout_us": 0, 00:54:36.887 "timeout_admin_us": 0, 00:54:36.887 "keep_alive_timeout_ms": 10000, 00:54:36.887 "arbitration_burst": 0, 00:54:36.887 "low_priority_weight": 0, 00:54:36.887 "medium_priority_weight": 0, 00:54:36.887 "high_priority_weight": 0, 00:54:36.887 "nvme_adminq_poll_period_us": 10000, 00:54:36.887 "nvme_ioq_poll_period_us": 0, 00:54:36.887 "io_queue_requests": 0, 00:54:36.887 "delay_cmd_submit": true, 00:54:36.887 "transport_retry_count": 4, 00:54:36.887 "bdev_retry_count": 3, 00:54:36.887 "transport_ack_timeout": 0, 00:54:36.887 "ctrlr_loss_timeout_sec": 0, 00:54:36.887 "reconnect_delay_sec": 0, 00:54:36.887 "fast_io_fail_timeout_sec": 0, 00:54:36.887 "disable_auto_failback": false, 00:54:36.887 "generate_uuids": false, 00:54:36.887 "transport_tos": 0, 00:54:36.887 "nvme_error_stat": false, 00:54:36.887 "rdma_srq_size": 0, 00:54:36.887 "io_path_stat": false, 00:54:36.887 "allow_accel_sequence": false, 00:54:36.887 "rdma_max_cq_size": 0, 00:54:36.887 "rdma_cm_event_timeout_ms": 0, 00:54:36.887 "dhchap_digests": [ 00:54:36.887 "sha256", 00:54:36.887 "sha384", 00:54:36.887 "sha512" 00:54:36.887 ], 00:54:36.887 
"dhchap_dhgroups": [ 00:54:36.887 "null", 00:54:36.887 "ffdhe2048", 00:54:36.887 "ffdhe3072", 00:54:36.887 "ffdhe4096", 00:54:36.887 "ffdhe6144", 00:54:36.887 "ffdhe8192" 00:54:36.887 ] 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_nvme_set_hotplug", 00:54:36.887 "params": { 00:54:36.887 "period_us": 100000, 00:54:36.887 "enable": false 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_malloc_create", 00:54:36.887 "params": { 00:54:36.887 "name": "malloc0", 00:54:36.887 "num_blocks": 8192, 00:54:36.887 "block_size": 4096, 00:54:36.887 "physical_block_size": 4096, 00:54:36.887 "uuid": "8e8969a9-e76f-4571-a99b-c78dcc48d333", 00:54:36.887 "optimal_io_boundary": 0, 00:54:36.887 "md_size": 0, 00:54:36.887 "dif_type": 0, 00:54:36.887 "dif_is_head_of_md": false, 00:54:36.887 "dif_pi_format": 0 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "bdev_wait_for_examine" 00:54:36.887 } 00:54:36.887 ] 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "subsystem": "nbd", 00:54:36.887 "config": [] 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "subsystem": "scheduler", 00:54:36.887 "config": [ 00:54:36.887 { 00:54:36.887 "method": "framework_set_scheduler", 00:54:36.887 "params": { 00:54:36.887 "name": "static" 00:54:36.887 } 00:54:36.887 } 00:54:36.887 ] 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "subsystem": "nvmf", 00:54:36.887 "config": [ 00:54:36.887 { 00:54:36.887 "method": "nvmf_set_config", 00:54:36.887 "params": { 00:54:36.887 "discovery_filter": "match_any", 00:54:36.887 "admin_cmd_passthru": { 00:54:36.887 "identify_ctrlr": false 00:54:36.887 }, 00:54:36.887 "dhchap_digests": [ 00:54:36.887 "sha256", 00:54:36.887 "sha384", 00:54:36.887 "sha512" 00:54:36.887 ], 00:54:36.887 "dhchap_dhgroups": [ 00:54:36.887 "null", 00:54:36.887 "ffdhe2048", 00:54:36.887 "ffdhe3072", 00:54:36.887 "ffdhe4096", 00:54:36.887 "ffdhe6144", 00:54:36.887 "ffdhe8192" 00:54:36.887 ] 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 
00:54:36.887 "method": "nvmf_set_max_subsystems", 00:54:36.887 "params": { 00:54:36.887 "max_subsystems": 1024 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_set_crdt", 00:54:36.887 "params": { 00:54:36.887 "crdt1": 0, 00:54:36.887 "crdt2": 0, 00:54:36.887 "crdt3": 0 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_create_transport", 00:54:36.887 "params": { 00:54:36.887 "trtype": "TCP", 00:54:36.887 "max_queue_depth": 128, 00:54:36.887 "max_io_qpairs_per_ctrlr": 127, 00:54:36.887 "in_capsule_data_size": 4096, 00:54:36.887 "max_io_size": 131072, 00:54:36.887 "io_unit_size": 131072, 00:54:36.887 "max_aq_depth": 128, 00:54:36.887 "num_shared_buffers": 511, 00:54:36.887 "buf_cache_size": 4294967295, 00:54:36.887 "dif_insert_or_strip": false, 00:54:36.887 "zcopy": false, 00:54:36.887 "c2h_success": false, 00:54:36.887 "sock_priority": 0, 00:54:36.887 "abort_timeout_sec": 1, 00:54:36.887 "ack_timeout": 0, 00:54:36.887 "data_wr_pool_size": 0 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_create_subsystem", 00:54:36.887 "params": { 00:54:36.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.887 "allow_any_host": false, 00:54:36.887 "serial_number": "SPDK00000000000001", 00:54:36.887 "model_number": "SPDK bdev Controller", 00:54:36.887 "max_namespaces": 10, 00:54:36.887 "min_cntlid": 1, 00:54:36.887 "max_cntlid": 65519, 00:54:36.887 "ana_reporting": false 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_subsystem_add_host", 00:54:36.887 "params": { 00:54:36.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.887 "host": "nqn.2016-06.io.spdk:host1", 00:54:36.887 "psk": "key0" 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_subsystem_add_ns", 00:54:36.887 "params": { 00:54:36.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.887 "namespace": { 00:54:36.887 "nsid": 1, 00:54:36.887 "bdev_name": "malloc0", 00:54:36.887 "nguid": 
"8E8969A9E76F4571A99BC78DCC48D333", 00:54:36.887 "uuid": "8e8969a9-e76f-4571-a99b-c78dcc48d333", 00:54:36.887 "no_auto_visible": false 00:54:36.887 } 00:54:36.887 } 00:54:36.887 }, 00:54:36.887 { 00:54:36.887 "method": "nvmf_subsystem_add_listener", 00:54:36.887 "params": { 00:54:36.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:36.887 "listen_address": { 00:54:36.887 "trtype": "TCP", 00:54:36.887 "adrfam": "IPv4", 00:54:36.887 "traddr": "10.0.0.2", 00:54:36.887 "trsvcid": "4420" 00:54:36.887 }, 00:54:36.887 "secure_channel": true 00:54:36.887 } 00:54:36.887 } 00:54:36.887 ] 00:54:36.887 } 00:54:36.887 ] 00:54:36.887 }' 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2431598 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2431598 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2431598 ']' 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:36.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:36.887 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:54:36.887 [2024-12-09 11:05:37.921814] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:36.887 [2024-12-09 11:05:37.921886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:36.887 [2024-12-09 11:05:38.022450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:37.147 [2024-12-09 11:05:38.066516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:37.147 [2024-12-09 11:05:38.066555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:37.147 [2024-12-09 11:05:38.066566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:37.147 [2024-12-09 11:05:38.066577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:37.147 [2024-12-09 11:05:38.066585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:37.147 [2024-12-09 11:05:38.067141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:37.147 [2024-12-09 11:05:38.293210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:37.405 [2024-12-09 11:05:38.325243] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:37.405 [2024-12-09 11:05:38.325470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:37.662 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:37.662 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:37.662 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:37.662 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:37.662 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2431785 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2431785 /var/tmp/bdevperf.sock 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2431785 ']' 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:54:37.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:54:37.921 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:54:37.921 "subsystems": [ 00:54:37.921 { 00:54:37.921 "subsystem": "keyring", 00:54:37.921 "config": [ 00:54:37.921 { 00:54:37.921 "method": "keyring_file_add_key", 00:54:37.921 "params": { 00:54:37.921 "name": "key0", 00:54:37.921 "path": "/tmp/tmp.rociq3ElhS" 00:54:37.921 } 00:54:37.921 } 00:54:37.921 ] 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "subsystem": "iobuf", 00:54:37.921 "config": [ 00:54:37.921 { 00:54:37.921 "method": "iobuf_set_options", 00:54:37.921 "params": { 00:54:37.921 "small_pool_count": 8192, 00:54:37.921 "large_pool_count": 1024, 00:54:37.921 "small_bufsize": 8192, 00:54:37.921 "large_bufsize": 135168, 00:54:37.921 "enable_numa": false 00:54:37.921 } 00:54:37.921 } 00:54:37.921 ] 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "subsystem": "sock", 00:54:37.921 "config": [ 00:54:37.921 { 00:54:37.921 "method": "sock_set_default_impl", 00:54:37.921 "params": { 00:54:37.921 "impl_name": "posix" 00:54:37.921 } 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "method": "sock_impl_set_options", 00:54:37.921 "params": { 00:54:37.921 "impl_name": "ssl", 00:54:37.921 "recv_buf_size": 4096, 00:54:37.921 "send_buf_size": 4096, 00:54:37.921 "enable_recv_pipe": true, 00:54:37.921 "enable_quickack": false, 00:54:37.921 "enable_placement_id": 0, 00:54:37.921 "enable_zerocopy_send_server": true, 00:54:37.921 
"enable_zerocopy_send_client": false, 00:54:37.921 "zerocopy_threshold": 0, 00:54:37.921 "tls_version": 0, 00:54:37.921 "enable_ktls": false 00:54:37.921 } 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "method": "sock_impl_set_options", 00:54:37.921 "params": { 00:54:37.921 "impl_name": "posix", 00:54:37.921 "recv_buf_size": 2097152, 00:54:37.921 "send_buf_size": 2097152, 00:54:37.921 "enable_recv_pipe": true, 00:54:37.921 "enable_quickack": false, 00:54:37.921 "enable_placement_id": 0, 00:54:37.921 "enable_zerocopy_send_server": true, 00:54:37.921 "enable_zerocopy_send_client": false, 00:54:37.921 "zerocopy_threshold": 0, 00:54:37.921 "tls_version": 0, 00:54:37.921 "enable_ktls": false 00:54:37.921 } 00:54:37.921 } 00:54:37.921 ] 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "subsystem": "vmd", 00:54:37.921 "config": [] 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "subsystem": "accel", 00:54:37.921 "config": [ 00:54:37.921 { 00:54:37.921 "method": "accel_set_options", 00:54:37.921 "params": { 00:54:37.921 "small_cache_size": 128, 00:54:37.921 "large_cache_size": 16, 00:54:37.921 "task_count": 2048, 00:54:37.921 "sequence_count": 2048, 00:54:37.921 "buf_count": 2048 00:54:37.921 } 00:54:37.921 } 00:54:37.921 ] 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "subsystem": "bdev", 00:54:37.921 "config": [ 00:54:37.921 { 00:54:37.921 "method": "bdev_set_options", 00:54:37.921 "params": { 00:54:37.921 "bdev_io_pool_size": 65535, 00:54:37.921 "bdev_io_cache_size": 256, 00:54:37.921 "bdev_auto_examine": true, 00:54:37.921 "iobuf_small_cache_size": 128, 00:54:37.921 "iobuf_large_cache_size": 16 00:54:37.921 } 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "method": "bdev_raid_set_options", 00:54:37.921 "params": { 00:54:37.921 "process_window_size_kb": 1024, 00:54:37.921 "process_max_bandwidth_mb_sec": 0 00:54:37.921 } 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "method": "bdev_iscsi_set_options", 00:54:37.921 "params": { 00:54:37.921 "timeout_sec": 30 00:54:37.921 } 00:54:37.921 }, 
00:54:37.921 { 00:54:37.921 "method": "bdev_nvme_set_options", 00:54:37.921 "params": { 00:54:37.921 "action_on_timeout": "none", 00:54:37.921 "timeout_us": 0, 00:54:37.921 "timeout_admin_us": 0, 00:54:37.921 "keep_alive_timeout_ms": 10000, 00:54:37.921 "arbitration_burst": 0, 00:54:37.921 "low_priority_weight": 0, 00:54:37.921 "medium_priority_weight": 0, 00:54:37.921 "high_priority_weight": 0, 00:54:37.921 "nvme_adminq_poll_period_us": 10000, 00:54:37.921 "nvme_ioq_poll_period_us": 0, 00:54:37.921 "io_queue_requests": 512, 00:54:37.921 "delay_cmd_submit": true, 00:54:37.921 "transport_retry_count": 4, 00:54:37.921 "bdev_retry_count": 3, 00:54:37.921 "transport_ack_timeout": 0, 00:54:37.921 "ctrlr_loss_timeout_sec": 0, 00:54:37.921 "reconnect_delay_sec": 0, 00:54:37.921 "fast_io_fail_timeout_sec": 0, 00:54:37.921 "disable_auto_failback": false, 00:54:37.921 "generate_uuids": false, 00:54:37.921 "transport_tos": 0, 00:54:37.921 "nvme_error_stat": false, 00:54:37.921 "rdma_srq_size": 0, 00:54:37.921 "io_path_stat": false, 00:54:37.921 "allow_accel_sequence": false, 00:54:37.921 "rdma_max_cq_size": 0, 00:54:37.921 "rdma_cm_event_timeout_ms": 0, 00:54:37.921 "dhchap_digests": [ 00:54:37.921 "sha256", 00:54:37.921 "sha384", 00:54:37.921 "sha512" 00:54:37.921 ], 00:54:37.921 "dhchap_dhgroups": [ 00:54:37.921 "null", 00:54:37.921 "ffdhe2048", 00:54:37.921 "ffdhe3072", 00:54:37.921 "ffdhe4096", 00:54:37.921 "ffdhe6144", 00:54:37.921 "ffdhe8192" 00:54:37.921 ] 00:54:37.921 } 00:54:37.921 }, 00:54:37.921 { 00:54:37.921 "method": "bdev_nvme_attach_controller", 00:54:37.921 "params": { 00:54:37.921 "name": "TLSTEST", 00:54:37.921 "trtype": "TCP", 00:54:37.921 "adrfam": "IPv4", 00:54:37.921 "traddr": "10.0.0.2", 00:54:37.921 "trsvcid": "4420", 00:54:37.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:37.921 "prchk_reftag": false, 00:54:37.922 "prchk_guard": false, 00:54:37.922 "ctrlr_loss_timeout_sec": 0, 00:54:37.922 "reconnect_delay_sec": 0, 00:54:37.922 
"fast_io_fail_timeout_sec": 0, 00:54:37.922 "psk": "key0", 00:54:37.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:37.922 "hdgst": false, 00:54:37.922 "ddgst": false, 00:54:37.922 "multipath": "multipath" 00:54:37.922 } 00:54:37.922 }, 00:54:37.922 { 00:54:37.922 "method": "bdev_nvme_set_hotplug", 00:54:37.922 "params": { 00:54:37.922 "period_us": 100000, 00:54:37.922 "enable": false 00:54:37.922 } 00:54:37.922 }, 00:54:37.922 { 00:54:37.922 "method": "bdev_wait_for_examine" 00:54:37.922 } 00:54:37.922 ] 00:54:37.922 }, 00:54:37.922 { 00:54:37.922 "subsystem": "nbd", 00:54:37.922 "config": [] 00:54:37.922 } 00:54:37.922 ] 00:54:37.922 }' 00:54:37.922 [2024-12-09 11:05:38.915145] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:37.922 [2024-12-09 11:05:38.915222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431785 ] 00:54:37.922 [2024-12-09 11:05:39.013617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:37.922 [2024-12-09 11:05:39.058372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:38.179 [2024-12-09 11:05:39.212570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:38.743 11:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:38.743 11:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:38.743 11:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:54:39.000 Running I/O for 10 seconds... 
00:54:40.864 4883.00 IOPS, 19.07 MiB/s [2024-12-09T10:05:43.413Z] 4737.00 IOPS, 18.50 MiB/s [2024-12-09T10:05:44.346Z] 4849.67 IOPS, 18.94 MiB/s [2024-12-09T10:05:45.292Z] 4908.00 IOPS, 19.17 MiB/s [2024-12-09T10:05:46.233Z] 4919.20 IOPS, 19.22 MiB/s [2024-12-09T10:05:47.166Z] 4944.33 IOPS, 19.31 MiB/s [2024-12-09T10:05:48.113Z] 4930.71 IOPS, 19.26 MiB/s [2024-12-09T10:05:49.128Z] 4892.12 IOPS, 19.11 MiB/s [2024-12-09T10:05:50.122Z] 4909.56 IOPS, 19.18 MiB/s [2024-12-09T10:05:50.122Z] 4942.30 IOPS, 19.31 MiB/s 00:54:48.946 Latency(us) 00:54:48.946 [2024-12-09T10:05:50.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:48.946 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:54:48.946 Verification LBA range: start 0x0 length 0x2000 00:54:48.946 TLSTESTn1 : 10.03 4941.32 19.30 0.00 0.00 25858.71 6211.67 32824.99 00:54:48.946 [2024-12-09T10:05:50.122Z] =================================================================================================================== 00:54:48.946 [2024-12-09T10:05:50.122Z] Total : 4941.32 19.30 0.00 0.00 25858.71 6211.67 32824.99 00:54:48.946 { 00:54:48.946 "results": [ 00:54:48.946 { 00:54:48.946 "job": "TLSTESTn1", 00:54:48.946 "core_mask": "0x4", 00:54:48.946 "workload": "verify", 00:54:48.946 "status": "finished", 00:54:48.946 "verify_range": { 00:54:48.946 "start": 0, 00:54:48.946 "length": 8192 00:54:48.946 }, 00:54:48.946 "queue_depth": 128, 00:54:48.946 "io_size": 4096, 00:54:48.946 "runtime": 10.027474, 00:54:48.946 "iops": 4941.324205876774, 00:54:48.946 "mibps": 19.302047679206147, 00:54:48.946 "io_failed": 0, 00:54:48.946 "io_timeout": 0, 00:54:48.946 "avg_latency_us": 25858.706554056724, 00:54:48.946 "min_latency_us": 6211.673043478261, 00:54:48.946 "max_latency_us": 32824.98782608696 00:54:48.946 } 00:54:48.946 ], 00:54:48.946 "core_count": 1 00:54:48.946 } 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2431785 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2431785 ']' 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2431785 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:48.946 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431785 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431785' 00:54:49.228 killing process with pid 2431785 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2431785 00:54:49.228 Received shutdown signal, test time was about 10.000000 seconds 00:54:49.228 00:54:49.228 Latency(us) 00:54:49.228 [2024-12-09T10:05:50.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:49.228 [2024-12-09T10:05:50.404Z] =================================================================================================================== 00:54:49.228 [2024-12-09T10:05:50.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2431785 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2431598 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2431598 ']' 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2431598 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:49.228 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431598 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431598' 00:54:49.498 killing process with pid 2431598 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2431598 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2431598 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2433223 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2433223 00:54:49.498 
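The two bdevperf summaries in this log (the 10-second TLSTESTn1 run above, and the 1-second nvme0n1 run further down) are internally consistent: with 4096-byte I/Os, MiB/s = IOPS × 4096 / 2^20, and with the fixed queue depth of 128 (`-q 128`) the average latency is roughly depth / IOPS by Little's law. The sketch below is an illustrative cross-check of the logged figures, not part of the test itself:

```shell
# Recompute throughput and estimate average latency from the logged IOPS.
# The IOPS figures are copied from the summaries in this log; queue depth
# and I/O size come from the bdevperf flags (-q 128 -o 4096).
check() {
  awk -v iops="$1" 'BEGIN {
    printf "MiB/s   = %.2f\n", iops * 4096 / (1024 * 1024)  # 4 KiB I/Os
    printf "avg_us ~= %.0f\n", 128 / iops * 1e6             # Littles law, qd=128
  }'
}
check 4941.32   # TLSTESTn1: log reports 19.30 MiB/s, 25858.71 us average
check 3240.91   # nvme0n1:   log reports 12.66 MiB/s, 39078.54 us average
```

Both throughput figures match the log exactly, and the Little's-law estimates (25904 us and 39495 us) land within about 1% of the reported averages; the small gap is expected since the queue is not full during ramp-up at the start of each run.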
11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2433223 ']' 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:49.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:49.498 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:49.772 [2024-12-09 11:05:50.696152] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:49.772 [2024-12-09 11:05:50.696220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:49.772 [2024-12-09 11:05:50.811234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:49.772 [2024-12-09 11:05:50.863753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:49.772 [2024-12-09 11:05:50.863805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:49.772 [2024-12-09 11:05:50.863824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:49.772 [2024-12-09 11:05:50.863838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:54:49.772 [2024-12-09 11:05:50.863850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:49.772 [2024-12-09 11:05:50.864457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rociq3ElhS 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rociq3ElhS 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:54:50.723 [2024-12-09 11:05:51.850838] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:50.723 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:54:50.981 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:54:51.239 [2024-12-09 11:05:52.235821] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:54:51.239 [2024-12-09 11:05:52.236077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:51.239 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:54:51.497 malloc0 00:54:51.497 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:54:51.497 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:52.064 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2433612 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2433612 /var/tmp/bdevperf.sock 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2433612 ']' 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:52.064 
11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:52.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:52.064 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:52.064 [2024-12-09 11:05:53.166576] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:52.064 [2024-12-09 11:05:53.166641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433612 ] 00:54:52.323 [2024-12-09 11:05:53.247730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:52.323 [2024-12-09 11:05:53.290202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:52.323 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:52.323 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:52.323 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:52.581 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:54:52.581 [2024-12-09 11:05:53.740453] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:54:52.839 nvme0n1 00:54:52.839 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:52.839 Running I/O for 1 seconds... 00:54:54.033 3247.00 IOPS, 12.68 MiB/s 00:54:54.033 Latency(us) 00:54:54.033 [2024-12-09T10:05:55.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:54.033 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:54:54.033 Verification LBA range: start 0x0 length 0x2000 00:54:54.033 nvme0n1 : 1.04 3240.91 12.66 0.00 0.00 39078.54 7294.44 74768.03 00:54:54.033 [2024-12-09T10:05:55.209Z] =================================================================================================================== 00:54:54.033 [2024-12-09T10:05:55.209Z] Total : 3240.91 12.66 0.00 0.00 39078.54 7294.44 74768.03 00:54:54.033 { 00:54:54.033 "results": [ 00:54:54.033 { 00:54:54.033 "job": "nvme0n1", 00:54:54.033 "core_mask": "0x2", 00:54:54.033 "workload": "verify", 00:54:54.033 "status": "finished", 00:54:54.033 "verify_range": { 00:54:54.033 "start": 0, 00:54:54.033 "length": 8192 00:54:54.033 }, 00:54:54.033 "queue_depth": 128, 00:54:54.033 "io_size": 4096, 00:54:54.033 "runtime": 1.041374, 00:54:54.033 "iops": 3240.910566232689, 00:54:54.033 "mibps": 12.65980689934644, 00:54:54.033 "io_failed": 0, 00:54:54.033 "io_timeout": 0, 00:54:54.033 "avg_latency_us": 39078.54481107891, 00:54:54.033 "min_latency_us": 7294.441739130435, 00:54:54.033 "max_latency_us": 74768.02782608695 00:54:54.033 } 00:54:54.033 ], 00:54:54.033 "core_count": 1 00:54:54.033 } 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2433612 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2433612 ']' 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2433612 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:54.033 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433612 00:54:54.033 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:54.033 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:54.033 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433612' 00:54:54.033 killing process with pid 2433612 00:54:54.033 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2433612 00:54:54.034 Received shutdown signal, test time was about 1.000000 seconds 00:54:54.034 00:54:54.034 Latency(us) 00:54:54.034 [2024-12-09T10:05:55.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:54.034 [2024-12-09T10:05:55.210Z] =================================================================================================================== 00:54:54.034 [2024-12-09T10:05:55.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:54.034 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2433612 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2433223 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2433223 ']' 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2433223 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433223 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433223' 00:54:54.293 killing process with pid 2433223 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2433223 00:54:54.293 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2433223 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2433953 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2433953 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2433953 ']' 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:54.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:54.551 11:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:54.551 [2024-12-09 11:05:55.636146] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:54.551 [2024-12-09 11:05:55.636227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:54.810 [2024-12-09 11:05:55.767316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:54.810 [2024-12-09 11:05:55.816558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:54.810 [2024-12-09 11:05:55.816616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:54.810 [2024-12-09 11:05:55.816632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:54.810 [2024-12-09 11:05:55.816650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:54.810 [2024-12-09 11:05:55.816662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:54.810 [2024-12-09 11:05:55.817276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.376 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:55.376 [2024-12-09 11:05:56.515345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:55.376 malloc0 00:54:55.376 [2024-12-09 11:05:56.545659] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:55.376 [2024-12-09 11:05:56.545916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:55.634 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.634 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2434002 00:54:55.634 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2434002 /var/tmp/bdevperf.sock 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2434002 ']' 00:54:55.635 11:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:55.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:55.635 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:54:55.635 [2024-12-09 11:05:56.632744] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:55.635 [2024-12-09 11:05:56.632812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434002 ] 00:54:55.635 [2024-12-09 11:05:56.729374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:55.635 [2024-12-09 11:05:56.771586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:55.893 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:55.893 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:55.893 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rociq3ElhS 00:54:55.893 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:54:56.151 [2024-12-09 11:05:57.318708] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:54:56.409 nvme0n1 00:54:56.409 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:56.409 Running I/O for 1 seconds... 
00:54:57.784 4261.00 IOPS, 16.64 MiB/s 00:54:57.784 Latency(us) 00:54:57.784 [2024-12-09T10:05:58.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:57.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:54:57.784 Verification LBA range: start 0x0 length 0x2000 00:54:57.784 nvme0n1 : 1.03 4281.17 16.72 0.00 0.00 29589.72 8548.17 36244.26 00:54:57.784 [2024-12-09T10:05:58.960Z] =================================================================================================================== 00:54:57.784 [2024-12-09T10:05:58.960Z] Total : 4281.17 16.72 0.00 0.00 29589.72 8548.17 36244.26 00:54:57.784 { 00:54:57.784 "results": [ 00:54:57.784 { 00:54:57.784 "job": "nvme0n1", 00:54:57.784 "core_mask": "0x2", 00:54:57.784 "workload": "verify", 00:54:57.784 "status": "finished", 00:54:57.784 "verify_range": { 00:54:57.784 "start": 0, 00:54:57.784 "length": 8192 00:54:57.784 }, 00:54:57.784 "queue_depth": 128, 00:54:57.784 "io_size": 4096, 00:54:57.784 "runtime": 1.025421, 00:54:57.784 "iops": 4281.168417654798, 00:54:57.784 "mibps": 16.723314131464054, 00:54:57.784 "io_failed": 0, 00:54:57.784 "io_timeout": 0, 00:54:57.784 "avg_latency_us": 29589.721436466276, 00:54:57.784 "min_latency_us": 8548.173913043478, 00:54:57.784 "max_latency_us": 36244.257391304345 00:54:57.784 } 00:54:57.784 ], 00:54:57.784 "core_count": 1 00:54:57.784 } 00:54:57.784 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:54:57.784 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.784 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:57.784 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.784 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:54:57.784 "subsystems": [ 00:54:57.784 { 00:54:57.784 "subsystem": 
"keyring", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "keyring_file_add_key", 00:54:57.784 "params": { 00:54:57.784 "name": "key0", 00:54:57.784 "path": "/tmp/tmp.rociq3ElhS" 00:54:57.784 } 00:54:57.784 } 00:54:57.784 ] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "iobuf", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "iobuf_set_options", 00:54:57.784 "params": { 00:54:57.784 "small_pool_count": 8192, 00:54:57.784 "large_pool_count": 1024, 00:54:57.784 "small_bufsize": 8192, 00:54:57.784 "large_bufsize": 135168, 00:54:57.784 "enable_numa": false 00:54:57.784 } 00:54:57.784 } 00:54:57.784 ] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "sock", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "sock_set_default_impl", 00:54:57.784 "params": { 00:54:57.784 "impl_name": "posix" 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "sock_impl_set_options", 00:54:57.784 "params": { 00:54:57.784 "impl_name": "ssl", 00:54:57.784 "recv_buf_size": 4096, 00:54:57.784 "send_buf_size": 4096, 00:54:57.784 "enable_recv_pipe": true, 00:54:57.784 "enable_quickack": false, 00:54:57.784 "enable_placement_id": 0, 00:54:57.784 "enable_zerocopy_send_server": true, 00:54:57.784 "enable_zerocopy_send_client": false, 00:54:57.784 "zerocopy_threshold": 0, 00:54:57.784 "tls_version": 0, 00:54:57.784 "enable_ktls": false 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "sock_impl_set_options", 00:54:57.784 "params": { 00:54:57.784 "impl_name": "posix", 00:54:57.784 "recv_buf_size": 2097152, 00:54:57.784 "send_buf_size": 2097152, 00:54:57.784 "enable_recv_pipe": true, 00:54:57.784 "enable_quickack": false, 00:54:57.784 "enable_placement_id": 0, 00:54:57.784 "enable_zerocopy_send_server": true, 00:54:57.784 "enable_zerocopy_send_client": false, 00:54:57.784 "zerocopy_threshold": 0, 00:54:57.784 "tls_version": 0, 00:54:57.784 "enable_ktls": false 00:54:57.784 } 00:54:57.784 } 00:54:57.784 
] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "vmd", 00:54:57.784 "config": [] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "accel", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "accel_set_options", 00:54:57.784 "params": { 00:54:57.784 "small_cache_size": 128, 00:54:57.784 "large_cache_size": 16, 00:54:57.784 "task_count": 2048, 00:54:57.784 "sequence_count": 2048, 00:54:57.784 "buf_count": 2048 00:54:57.784 } 00:54:57.784 } 00:54:57.784 ] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "bdev", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "bdev_set_options", 00:54:57.784 "params": { 00:54:57.784 "bdev_io_pool_size": 65535, 00:54:57.784 "bdev_io_cache_size": 256, 00:54:57.784 "bdev_auto_examine": true, 00:54:57.784 "iobuf_small_cache_size": 128, 00:54:57.784 "iobuf_large_cache_size": 16 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_raid_set_options", 00:54:57.784 "params": { 00:54:57.784 "process_window_size_kb": 1024, 00:54:57.784 "process_max_bandwidth_mb_sec": 0 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_iscsi_set_options", 00:54:57.784 "params": { 00:54:57.784 "timeout_sec": 30 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_nvme_set_options", 00:54:57.784 "params": { 00:54:57.784 "action_on_timeout": "none", 00:54:57.784 "timeout_us": 0, 00:54:57.784 "timeout_admin_us": 0, 00:54:57.784 "keep_alive_timeout_ms": 10000, 00:54:57.784 "arbitration_burst": 0, 00:54:57.784 "low_priority_weight": 0, 00:54:57.784 "medium_priority_weight": 0, 00:54:57.784 "high_priority_weight": 0, 00:54:57.784 "nvme_adminq_poll_period_us": 10000, 00:54:57.784 "nvme_ioq_poll_period_us": 0, 00:54:57.784 "io_queue_requests": 0, 00:54:57.784 "delay_cmd_submit": true, 00:54:57.784 "transport_retry_count": 4, 00:54:57.784 "bdev_retry_count": 3, 00:54:57.784 "transport_ack_timeout": 0, 00:54:57.784 "ctrlr_loss_timeout_sec": 0, 
00:54:57.784 "reconnect_delay_sec": 0, 00:54:57.784 "fast_io_fail_timeout_sec": 0, 00:54:57.784 "disable_auto_failback": false, 00:54:57.784 "generate_uuids": false, 00:54:57.784 "transport_tos": 0, 00:54:57.784 "nvme_error_stat": false, 00:54:57.784 "rdma_srq_size": 0, 00:54:57.784 "io_path_stat": false, 00:54:57.784 "allow_accel_sequence": false, 00:54:57.784 "rdma_max_cq_size": 0, 00:54:57.784 "rdma_cm_event_timeout_ms": 0, 00:54:57.784 "dhchap_digests": [ 00:54:57.784 "sha256", 00:54:57.784 "sha384", 00:54:57.784 "sha512" 00:54:57.784 ], 00:54:57.784 "dhchap_dhgroups": [ 00:54:57.784 "null", 00:54:57.784 "ffdhe2048", 00:54:57.784 "ffdhe3072", 00:54:57.784 "ffdhe4096", 00:54:57.784 "ffdhe6144", 00:54:57.784 "ffdhe8192" 00:54:57.784 ] 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_nvme_set_hotplug", 00:54:57.784 "params": { 00:54:57.784 "period_us": 100000, 00:54:57.784 "enable": false 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_malloc_create", 00:54:57.784 "params": { 00:54:57.784 "name": "malloc0", 00:54:57.784 "num_blocks": 8192, 00:54:57.784 "block_size": 4096, 00:54:57.784 "physical_block_size": 4096, 00:54:57.784 "uuid": "5745832b-9f8b-4ed3-81df-c9001252a662", 00:54:57.784 "optimal_io_boundary": 0, 00:54:57.784 "md_size": 0, 00:54:57.784 "dif_type": 0, 00:54:57.784 "dif_is_head_of_md": false, 00:54:57.784 "dif_pi_format": 0 00:54:57.784 } 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "method": "bdev_wait_for_examine" 00:54:57.784 } 00:54:57.784 ] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "nbd", 00:54:57.784 "config": [] 00:54:57.784 }, 00:54:57.784 { 00:54:57.784 "subsystem": "scheduler", 00:54:57.784 "config": [ 00:54:57.784 { 00:54:57.784 "method": "framework_set_scheduler", 00:54:57.784 "params": { 00:54:57.784 "name": "static" 00:54:57.784 } 00:54:57.784 } 00:54:57.784 ] 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "subsystem": "nvmf", 00:54:57.785 "config": [ 00:54:57.785 { 
00:54:57.785 "method": "nvmf_set_config", 00:54:57.785 "params": { 00:54:57.785 "discovery_filter": "match_any", 00:54:57.785 "admin_cmd_passthru": { 00:54:57.785 "identify_ctrlr": false 00:54:57.785 }, 00:54:57.785 "dhchap_digests": [ 00:54:57.785 "sha256", 00:54:57.785 "sha384", 00:54:57.785 "sha512" 00:54:57.785 ], 00:54:57.785 "dhchap_dhgroups": [ 00:54:57.785 "null", 00:54:57.785 "ffdhe2048", 00:54:57.785 "ffdhe3072", 00:54:57.785 "ffdhe4096", 00:54:57.785 "ffdhe6144", 00:54:57.785 "ffdhe8192" 00:54:57.785 ] 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_set_max_subsystems", 00:54:57.785 "params": { 00:54:57.785 "max_subsystems": 1024 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_set_crdt", 00:54:57.785 "params": { 00:54:57.785 "crdt1": 0, 00:54:57.785 "crdt2": 0, 00:54:57.785 "crdt3": 0 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_create_transport", 00:54:57.785 "params": { 00:54:57.785 "trtype": "TCP", 00:54:57.785 "max_queue_depth": 128, 00:54:57.785 "max_io_qpairs_per_ctrlr": 127, 00:54:57.785 "in_capsule_data_size": 4096, 00:54:57.785 "max_io_size": 131072, 00:54:57.785 "io_unit_size": 131072, 00:54:57.785 "max_aq_depth": 128, 00:54:57.785 "num_shared_buffers": 511, 00:54:57.785 "buf_cache_size": 4294967295, 00:54:57.785 "dif_insert_or_strip": false, 00:54:57.785 "zcopy": false, 00:54:57.785 "c2h_success": false, 00:54:57.785 "sock_priority": 0, 00:54:57.785 "abort_timeout_sec": 1, 00:54:57.785 "ack_timeout": 0, 00:54:57.785 "data_wr_pool_size": 0 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_create_subsystem", 00:54:57.785 "params": { 00:54:57.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:57.785 "allow_any_host": false, 00:54:57.785 "serial_number": "00000000000000000000", 00:54:57.785 "model_number": "SPDK bdev Controller", 00:54:57.785 "max_namespaces": 32, 00:54:57.785 "min_cntlid": 1, 00:54:57.785 "max_cntlid": 65519, 00:54:57.785 
"ana_reporting": false 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_subsystem_add_host", 00:54:57.785 "params": { 00:54:57.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:57.785 "host": "nqn.2016-06.io.spdk:host1", 00:54:57.785 "psk": "key0" 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_subsystem_add_ns", 00:54:57.785 "params": { 00:54:57.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:57.785 "namespace": { 00:54:57.785 "nsid": 1, 00:54:57.785 "bdev_name": "malloc0", 00:54:57.785 "nguid": "5745832B9F8B4ED381DFC9001252A662", 00:54:57.785 "uuid": "5745832b-9f8b-4ed3-81df-c9001252a662", 00:54:57.785 "no_auto_visible": false 00:54:57.785 } 00:54:57.785 } 00:54:57.785 }, 00:54:57.785 { 00:54:57.785 "method": "nvmf_subsystem_add_listener", 00:54:57.785 "params": { 00:54:57.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:57.785 "listen_address": { 00:54:57.785 "trtype": "TCP", 00:54:57.785 "adrfam": "IPv4", 00:54:57.785 "traddr": "10.0.0.2", 00:54:57.785 "trsvcid": "4420" 00:54:57.785 }, 00:54:57.785 "secure_channel": false, 00:54:57.785 "sock_impl": "ssl" 00:54:57.785 } 00:54:57.785 } 00:54:57.785 ] 00:54:57.785 } 00:54:57.785 ] 00:54:57.785 }' 00:54:57.785 11:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:54:58.044 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:54:58.044 "subsystems": [ 00:54:58.044 { 00:54:58.044 "subsystem": "keyring", 00:54:58.044 "config": [ 00:54:58.044 { 00:54:58.044 "method": "keyring_file_add_key", 00:54:58.044 "params": { 00:54:58.044 "name": "key0", 00:54:58.044 "path": "/tmp/tmp.rociq3ElhS" 00:54:58.044 } 00:54:58.044 } 00:54:58.044 ] 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "subsystem": "iobuf", 00:54:58.044 "config": [ 00:54:58.044 { 00:54:58.044 "method": "iobuf_set_options", 00:54:58.044 "params": { 00:54:58.044 
"small_pool_count": 8192, 00:54:58.044 "large_pool_count": 1024, 00:54:58.044 "small_bufsize": 8192, 00:54:58.044 "large_bufsize": 135168, 00:54:58.044 "enable_numa": false 00:54:58.044 } 00:54:58.044 } 00:54:58.044 ] 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "subsystem": "sock", 00:54:58.044 "config": [ 00:54:58.044 { 00:54:58.044 "method": "sock_set_default_impl", 00:54:58.044 "params": { 00:54:58.044 "impl_name": "posix" 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "sock_impl_set_options", 00:54:58.044 "params": { 00:54:58.044 "impl_name": "ssl", 00:54:58.044 "recv_buf_size": 4096, 00:54:58.044 "send_buf_size": 4096, 00:54:58.044 "enable_recv_pipe": true, 00:54:58.044 "enable_quickack": false, 00:54:58.044 "enable_placement_id": 0, 00:54:58.044 "enable_zerocopy_send_server": true, 00:54:58.044 "enable_zerocopy_send_client": false, 00:54:58.044 "zerocopy_threshold": 0, 00:54:58.044 "tls_version": 0, 00:54:58.044 "enable_ktls": false 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "sock_impl_set_options", 00:54:58.044 "params": { 00:54:58.044 "impl_name": "posix", 00:54:58.044 "recv_buf_size": 2097152, 00:54:58.044 "send_buf_size": 2097152, 00:54:58.044 "enable_recv_pipe": true, 00:54:58.044 "enable_quickack": false, 00:54:58.044 "enable_placement_id": 0, 00:54:58.044 "enable_zerocopy_send_server": true, 00:54:58.044 "enable_zerocopy_send_client": false, 00:54:58.044 "zerocopy_threshold": 0, 00:54:58.044 "tls_version": 0, 00:54:58.044 "enable_ktls": false 00:54:58.044 } 00:54:58.044 } 00:54:58.044 ] 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "subsystem": "vmd", 00:54:58.044 "config": [] 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "subsystem": "accel", 00:54:58.044 "config": [ 00:54:58.044 { 00:54:58.044 "method": "accel_set_options", 00:54:58.044 "params": { 00:54:58.044 "small_cache_size": 128, 00:54:58.044 "large_cache_size": 16, 00:54:58.044 "task_count": 2048, 00:54:58.044 "sequence_count": 2048, 00:54:58.044 
"buf_count": 2048 00:54:58.044 } 00:54:58.044 } 00:54:58.044 ] 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "subsystem": "bdev", 00:54:58.044 "config": [ 00:54:58.044 { 00:54:58.044 "method": "bdev_set_options", 00:54:58.044 "params": { 00:54:58.044 "bdev_io_pool_size": 65535, 00:54:58.044 "bdev_io_cache_size": 256, 00:54:58.044 "bdev_auto_examine": true, 00:54:58.044 "iobuf_small_cache_size": 128, 00:54:58.044 "iobuf_large_cache_size": 16 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "bdev_raid_set_options", 00:54:58.044 "params": { 00:54:58.044 "process_window_size_kb": 1024, 00:54:58.044 "process_max_bandwidth_mb_sec": 0 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "bdev_iscsi_set_options", 00:54:58.044 "params": { 00:54:58.044 "timeout_sec": 30 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "bdev_nvme_set_options", 00:54:58.044 "params": { 00:54:58.044 "action_on_timeout": "none", 00:54:58.044 "timeout_us": 0, 00:54:58.044 "timeout_admin_us": 0, 00:54:58.044 "keep_alive_timeout_ms": 10000, 00:54:58.044 "arbitration_burst": 0, 00:54:58.044 "low_priority_weight": 0, 00:54:58.044 "medium_priority_weight": 0, 00:54:58.044 "high_priority_weight": 0, 00:54:58.044 "nvme_adminq_poll_period_us": 10000, 00:54:58.044 "nvme_ioq_poll_period_us": 0, 00:54:58.044 "io_queue_requests": 512, 00:54:58.044 "delay_cmd_submit": true, 00:54:58.044 "transport_retry_count": 4, 00:54:58.044 "bdev_retry_count": 3, 00:54:58.044 "transport_ack_timeout": 0, 00:54:58.044 "ctrlr_loss_timeout_sec": 0, 00:54:58.044 "reconnect_delay_sec": 0, 00:54:58.044 "fast_io_fail_timeout_sec": 0, 00:54:58.044 "disable_auto_failback": false, 00:54:58.044 "generate_uuids": false, 00:54:58.044 "transport_tos": 0, 00:54:58.044 "nvme_error_stat": false, 00:54:58.044 "rdma_srq_size": 0, 00:54:58.044 "io_path_stat": false, 00:54:58.044 "allow_accel_sequence": false, 00:54:58.044 "rdma_max_cq_size": 0, 00:54:58.044 "rdma_cm_event_timeout_ms": 0, 
00:54:58.044 "dhchap_digests": [ 00:54:58.044 "sha256", 00:54:58.044 "sha384", 00:54:58.044 "sha512" 00:54:58.044 ], 00:54:58.044 "dhchap_dhgroups": [ 00:54:58.044 "null", 00:54:58.044 "ffdhe2048", 00:54:58.044 "ffdhe3072", 00:54:58.044 "ffdhe4096", 00:54:58.044 "ffdhe6144", 00:54:58.044 "ffdhe8192" 00:54:58.044 ] 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "bdev_nvme_attach_controller", 00:54:58.044 "params": { 00:54:58.044 "name": "nvme0", 00:54:58.044 "trtype": "TCP", 00:54:58.044 "adrfam": "IPv4", 00:54:58.044 "traddr": "10.0.0.2", 00:54:58.044 "trsvcid": "4420", 00:54:58.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:58.044 "prchk_reftag": false, 00:54:58.044 "prchk_guard": false, 00:54:58.044 "ctrlr_loss_timeout_sec": 0, 00:54:58.044 "reconnect_delay_sec": 0, 00:54:58.044 "fast_io_fail_timeout_sec": 0, 00:54:58.044 "psk": "key0", 00:54:58.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:58.044 "hdgst": false, 00:54:58.044 "ddgst": false, 00:54:58.044 "multipath": "multipath" 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.044 "method": "bdev_nvme_set_hotplug", 00:54:58.044 "params": { 00:54:58.044 "period_us": 100000, 00:54:58.044 "enable": false 00:54:58.044 } 00:54:58.044 }, 00:54:58.044 { 00:54:58.045 "method": "bdev_enable_histogram", 00:54:58.045 "params": { 00:54:58.045 "name": "nvme0n1", 00:54:58.045 "enable": true 00:54:58.045 } 00:54:58.045 }, 00:54:58.045 { 00:54:58.045 "method": "bdev_wait_for_examine" 00:54:58.045 } 00:54:58.045 ] 00:54:58.045 }, 00:54:58.045 { 00:54:58.045 "subsystem": "nbd", 00:54:58.045 "config": [] 00:54:58.045 } 00:54:58.045 ] 00:54:58.045 }' 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2434002 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2434002 ']' 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2434002 00:54:58.045 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434002 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434002' 00:54:58.045 killing process with pid 2434002 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2434002 00:54:58.045 Received shutdown signal, test time was about 1.000000 seconds 00:54:58.045 00:54:58.045 Latency(us) 00:54:58.045 [2024-12-09T10:05:59.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:58.045 [2024-12-09T10:05:59.221Z] =================================================================================================================== 00:54:58.045 [2024-12-09T10:05:59.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:58.045 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2434002 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2433953 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2433953 ']' 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2433953 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:58.303 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433953 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433953' 00:54:58.303 killing process with pid 2433953 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2433953 00:54:58.303 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2433953 00:54:58.562 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:54:58.562 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:58.562 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:58.562 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:54:58.562 "subsystems": [ 00:54:58.562 { 00:54:58.562 "subsystem": "keyring", 00:54:58.562 "config": [ 00:54:58.562 { 00:54:58.562 "method": "keyring_file_add_key", 00:54:58.562 "params": { 00:54:58.562 "name": "key0", 00:54:58.562 "path": "/tmp/tmp.rociq3ElhS" 00:54:58.562 } 00:54:58.562 } 00:54:58.562 ] 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "subsystem": "iobuf", 00:54:58.562 "config": [ 00:54:58.562 { 00:54:58.562 "method": "iobuf_set_options", 00:54:58.562 "params": { 00:54:58.562 "small_pool_count": 8192, 00:54:58.562 "large_pool_count": 1024, 00:54:58.562 "small_bufsize": 8192, 00:54:58.562 "large_bufsize": 135168, 00:54:58.562 "enable_numa": false 00:54:58.562 } 00:54:58.562 } 00:54:58.562 ] 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "subsystem": "sock", 00:54:58.562 "config": [ 
00:54:58.562 { 00:54:58.562 "method": "sock_set_default_impl", 00:54:58.562 "params": { 00:54:58.562 "impl_name": "posix" 00:54:58.562 } 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "method": "sock_impl_set_options", 00:54:58.562 "params": { 00:54:58.562 "impl_name": "ssl", 00:54:58.562 "recv_buf_size": 4096, 00:54:58.562 "send_buf_size": 4096, 00:54:58.562 "enable_recv_pipe": true, 00:54:58.562 "enable_quickack": false, 00:54:58.562 "enable_placement_id": 0, 00:54:58.562 "enable_zerocopy_send_server": true, 00:54:58.562 "enable_zerocopy_send_client": false, 00:54:58.562 "zerocopy_threshold": 0, 00:54:58.562 "tls_version": 0, 00:54:58.562 "enable_ktls": false 00:54:58.562 } 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "method": "sock_impl_set_options", 00:54:58.562 "params": { 00:54:58.562 "impl_name": "posix", 00:54:58.562 "recv_buf_size": 2097152, 00:54:58.562 "send_buf_size": 2097152, 00:54:58.562 "enable_recv_pipe": true, 00:54:58.562 "enable_quickack": false, 00:54:58.562 "enable_placement_id": 0, 00:54:58.562 "enable_zerocopy_send_server": true, 00:54:58.562 "enable_zerocopy_send_client": false, 00:54:58.562 "zerocopy_threshold": 0, 00:54:58.562 "tls_version": 0, 00:54:58.562 "enable_ktls": false 00:54:58.562 } 00:54:58.562 } 00:54:58.562 ] 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "subsystem": "vmd", 00:54:58.562 "config": [] 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "subsystem": "accel", 00:54:58.562 "config": [ 00:54:58.562 { 00:54:58.562 "method": "accel_set_options", 00:54:58.562 "params": { 00:54:58.562 "small_cache_size": 128, 00:54:58.562 "large_cache_size": 16, 00:54:58.562 "task_count": 2048, 00:54:58.562 "sequence_count": 2048, 00:54:58.562 "buf_count": 2048 00:54:58.562 } 00:54:58.562 } 00:54:58.562 ] 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "subsystem": "bdev", 00:54:58.562 "config": [ 00:54:58.562 { 00:54:58.562 "method": "bdev_set_options", 00:54:58.562 "params": { 00:54:58.562 "bdev_io_pool_size": 65535, 00:54:58.562 "bdev_io_cache_size": 
256, 00:54:58.562 "bdev_auto_examine": true, 00:54:58.562 "iobuf_small_cache_size": 128, 00:54:58.562 "iobuf_large_cache_size": 16 00:54:58.562 } 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "method": "bdev_raid_set_options", 00:54:58.562 "params": { 00:54:58.562 "process_window_size_kb": 1024, 00:54:58.562 "process_max_bandwidth_mb_sec": 0 00:54:58.562 } 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "method": "bdev_iscsi_set_options", 00:54:58.562 "params": { 00:54:58.562 "timeout_sec": 30 00:54:58.562 } 00:54:58.562 }, 00:54:58.562 { 00:54:58.562 "method": "bdev_nvme_set_options", 00:54:58.562 "params": { 00:54:58.562 "action_on_timeout": "none", 00:54:58.562 "timeout_us": 0, 00:54:58.562 "timeout_admin_us": 0, 00:54:58.562 "keep_alive_timeout_ms": 10000, 00:54:58.562 "arbitration_burst": 0, 00:54:58.562 "low_priority_weight": 0, 00:54:58.562 "medium_priority_weight": 0, 00:54:58.562 "high_priority_weight": 0, 00:54:58.562 "nvme_adminq_poll_period_us": 10000, 00:54:58.562 "nvme_ioq_poll_period_us": 0, 00:54:58.562 "io_queue_requests": 0, 00:54:58.562 "delay_cmd_submit": true, 00:54:58.562 "transport_retry_count": 4, 00:54:58.562 "bdev_retry_count": 3, 00:54:58.562 "transport_ack_timeout": 0, 00:54:58.562 "ctrlr_loss_timeout_sec": 0, 00:54:58.562 "reconnect_delay_sec": 0, 00:54:58.562 "fast_io_fail_timeout_sec": 0, 00:54:58.563 "disable_auto_failback": false, 00:54:58.563 "generate_uuids": false, 00:54:58.563 "transport_tos": 0, 00:54:58.563 "nvme_error_stat": false, 00:54:58.563 "rdma_srq_size": 0, 00:54:58.563 "io_path_stat": false, 00:54:58.563 "allow_accel_sequence": false, 00:54:58.563 "rdma_max_cq_size": 0, 00:54:58.563 "rdma_cm_event_timeout_ms": 0, 00:54:58.563 "dhchap_digests": [ 00:54:58.563 "sha256", 00:54:58.563 "sha384", 00:54:58.563 "sha512" 00:54:58.563 ], 00:54:58.563 "dhchap_dhgroups": [ 00:54:58.563 "null", 00:54:58.563 "ffdhe2048", 00:54:58.563 "ffdhe3072", 00:54:58.563 "ffdhe4096", 00:54:58.563 "ffdhe6144", 00:54:58.563 "ffdhe8192" 00:54:58.563 ] 
00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "bdev_nvme_set_hotplug", 00:54:58.563 "params": { 00:54:58.563 "period_us": 100000, 00:54:58.563 "enable": false 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "bdev_malloc_create", 00:54:58.563 "params": { 00:54:58.563 "name": "malloc0", 00:54:58.563 "num_blocks": 8192, 00:54:58.563 "block_size": 4096, 00:54:58.563 "physical_block_size": 4096, 00:54:58.563 "uuid": "5745832b-9f8b-4ed3-81df-c9001252a662", 00:54:58.563 "optimal_io_boundary": 0, 00:54:58.563 "md_size": 0, 00:54:58.563 "dif_type": 0, 00:54:58.563 "dif_is_head_of_md": false, 00:54:58.563 "dif_pi_format": 0 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "bdev_wait_for_examine" 00:54:58.563 } 00:54:58.563 ] 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "subsystem": "nbd", 00:54:58.563 "config": [] 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "subsystem": "scheduler", 00:54:58.563 "config": [ 00:54:58.563 { 00:54:58.563 "method": "framework_set_scheduler", 00:54:58.563 "params": { 00:54:58.563 "name": "static" 00:54:58.563 } 00:54:58.563 } 00:54:58.563 ] 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "subsystem": "nvmf", 00:54:58.563 "config": [ 00:54:58.563 { 00:54:58.563 "method": "nvmf_set_config", 00:54:58.563 "params": { 00:54:58.563 "discovery_filter": "match_any", 00:54:58.563 "admin_cmd_passthru": { 00:54:58.563 "identify_ctrlr": false 00:54:58.563 }, 00:54:58.563 "dhchap_digests": [ 00:54:58.563 "sha256", 00:54:58.563 "sha384", 00:54:58.563 "sha512" 00:54:58.563 ], 00:54:58.563 "dhchap_dhgroups": [ 00:54:58.563 "null", 00:54:58.563 "ffdhe2048", 00:54:58.563 "ffdhe3072", 00:54:58.563 "ffdhe4096", 00:54:58.563 "ffdhe6144", 00:54:58.563 "ffdhe8192" 00:54:58.563 ] 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "nvmf_set_max_subsystems", 00:54:58.563 "params": { 00:54:58.563 "max_subsystems": 1024 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": 
"nvmf_set_crdt", 00:54:58.563 "params": { 00:54:58.563 "crdt1": 0, 00:54:58.563 "crdt2": 0, 00:54:58.563 "crdt3": 0 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "nvmf_create_transport", 00:54:58.563 "params": { 00:54:58.563 "trtype": "TCP", 00:54:58.563 "max_queue_depth": 128, 00:54:58.563 "max_io_qpairs_per_ctrlr": 127, 00:54:58.563 "in_capsule_data_size": 4096, 00:54:58.563 "max_io_size": 131072, 00:54:58.563 "io_unit_size": 131072, 00:54:58.563 "max_aq_depth": 128, 00:54:58.563 "num_shared_buffers": 511, 00:54:58.563 "buf_cache_size": 4294967295, 00:54:58.563 "dif_insert_or_strip": false, 00:54:58.563 "zcopy": false, 00:54:58.563 "c2h_success": false, 00:54:58.563 "sock_priority": 0, 00:54:58.563 "abort_timeout_sec": 1, 00:54:58.563 "ack_timeout": 0, 00:54:58.563 "data_wr_pool_size": 0 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "nvmf_create_subsystem", 00:54:58.563 "params": { 00:54:58.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:58.563 "allow_any_host": false, 00:54:58.563 "serial_number": "00000000000000000000", 00:54:58.563 "model_number": "SPDK bdev Controller", 00:54:58.563 "max_namespaces": 32, 00:54:58.563 "min_cntlid": 1, 00:54:58.563 "max_cntlid": 65519, 00:54:58.563 "ana_reporting": false 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "nvmf_subsystem_add_host", 00:54:58.563 "params": { 00:54:58.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:58.563 "host": "nqn.2016-06.io.spdk:host1", 00:54:58.563 "psk": "key0" 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 00:54:58.563 "method": "nvmf_subsystem_add_ns", 00:54:58.563 "params": { 00:54:58.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:58.563 "namespace": { 00:54:58.563 "nsid": 1, 00:54:58.563 "bdev_name": "malloc0", 00:54:58.563 "nguid": "5745832B9F8B4ED381DFC9001252A662", 00:54:58.563 "uuid": "5745832b-9f8b-4ed3-81df-c9001252a662", 00:54:58.563 "no_auto_visible": false 00:54:58.563 } 00:54:58.563 } 00:54:58.563 }, 00:54:58.563 { 
00:54:58.563 "method": "nvmf_subsystem_add_listener", 00:54:58.563 "params": { 00:54:58.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:58.563 "listen_address": { 00:54:58.563 "trtype": "TCP", 00:54:58.563 "adrfam": "IPv4", 00:54:58.563 "traddr": "10.0.0.2", 00:54:58.563 "trsvcid": "4420" 00:54:58.563 }, 00:54:58.563 "secure_channel": false, 00:54:58.563 "sock_impl": "ssl" 00:54:58.563 } 00:54:58.563 } 00:54:58.563 ] 00:54:58.563 } 00:54:58.563 ] 00:54:58.563 }' 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2434455 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2434455 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2434455 ']' 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:58.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:58.563 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:54:58.563 [2024-12-09 11:05:59.670240] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:54:58.563 [2024-12-09 11:05:59.670316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:58.822 [2024-12-09 11:05:59.801154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:58.822 [2024-12-09 11:05:59.853050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:58.822 [2024-12-09 11:05:59.853099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:58.822 [2024-12-09 11:05:59.853116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:58.822 [2024-12-09 11:05:59.853130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:58.822 [2024-12-09 11:05:59.853141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:54:58.822 [2024-12-09 11:05:59.853823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:59.080 [2024-12-09 11:06:00.089226] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:59.080 [2024-12-09 11:06:00.121230] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:54:59.080 [2024-12-09 11:06:00.121474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2434614 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2434614 /var/tmp/bdevperf.sock 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2434614 ']' 00:54:59.647 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:54:59.647 "subsystems": [ 00:54:59.647 { 00:54:59.647 "subsystem": "keyring", 00:54:59.647 "config": [ 00:54:59.647 { 00:54:59.647 "method": "keyring_file_add_key", 00:54:59.647 "params": { 00:54:59.647 "name": "key0", 00:54:59.647 "path": "/tmp/tmp.rociq3ElhS" 00:54:59.647 } 00:54:59.647 } 00:54:59.647 ] 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "subsystem": "iobuf", 
00:54:59.647 "config": [ 00:54:59.647 { 00:54:59.647 "method": "iobuf_set_options", 00:54:59.647 "params": { 00:54:59.647 "small_pool_count": 8192, 00:54:59.647 "large_pool_count": 1024, 00:54:59.647 "small_bufsize": 8192, 00:54:59.647 "large_bufsize": 135168, 00:54:59.647 "enable_numa": false 00:54:59.647 } 00:54:59.647 } 00:54:59.647 ] 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "subsystem": "sock", 00:54:59.647 "config": [ 00:54:59.647 { 00:54:59.647 "method": "sock_set_default_impl", 00:54:59.647 "params": { 00:54:59.647 "impl_name": "posix" 00:54:59.647 } 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "method": "sock_impl_set_options", 00:54:59.647 "params": { 00:54:59.647 "impl_name": "ssl", 00:54:59.647 "recv_buf_size": 4096, 00:54:59.647 "send_buf_size": 4096, 00:54:59.647 "enable_recv_pipe": true, 00:54:59.647 "enable_quickack": false, 00:54:59.647 "enable_placement_id": 0, 00:54:59.647 "enable_zerocopy_send_server": true, 00:54:59.647 "enable_zerocopy_send_client": false, 00:54:59.647 "zerocopy_threshold": 0, 00:54:59.647 "tls_version": 0, 00:54:59.647 "enable_ktls": false 00:54:59.647 } 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "method": "sock_impl_set_options", 00:54:59.647 "params": { 00:54:59.647 "impl_name": "posix", 00:54:59.647 "recv_buf_size": 2097152, 00:54:59.647 "send_buf_size": 2097152, 00:54:59.647 "enable_recv_pipe": true, 00:54:59.647 "enable_quickack": false, 00:54:59.647 "enable_placement_id": 0, 00:54:59.647 "enable_zerocopy_send_server": true, 00:54:59.647 "enable_zerocopy_send_client": false, 00:54:59.647 "zerocopy_threshold": 0, 00:54:59.647 "tls_version": 0, 00:54:59.647 "enable_ktls": false 00:54:59.647 } 00:54:59.647 } 00:54:59.647 ] 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "subsystem": "vmd", 00:54:59.647 "config": [] 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "subsystem": "accel", 00:54:59.647 "config": [ 00:54:59.647 { 00:54:59.647 "method": "accel_set_options", 00:54:59.647 "params": { 00:54:59.647 "small_cache_size": 128, 
00:54:59.647 "large_cache_size": 16, 00:54:59.647 "task_count": 2048, 00:54:59.647 "sequence_count": 2048, 00:54:59.647 "buf_count": 2048 00:54:59.647 } 00:54:59.647 } 00:54:59.647 ] 00:54:59.647 }, 00:54:59.647 { 00:54:59.647 "subsystem": "bdev", 00:54:59.647 "config": [ 00:54:59.647 { 00:54:59.647 "method": "bdev_set_options", 00:54:59.647 "params": { 00:54:59.647 "bdev_io_pool_size": 65535, 00:54:59.647 "bdev_io_cache_size": 256, 00:54:59.647 "bdev_auto_examine": true, 00:54:59.647 "iobuf_small_cache_size": 128, 00:54:59.647 "iobuf_large_cache_size": 16 00:54:59.647 } 00:54:59.647 }, 00:54:59.647 { 00:54:59.648 "method": "bdev_raid_set_options", 00:54:59.648 "params": { 00:54:59.648 "process_window_size_kb": 1024, 00:54:59.648 "process_max_bandwidth_mb_sec": 0 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_iscsi_set_options", 00:54:59.648 "params": { 00:54:59.648 "timeout_sec": 30 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_nvme_set_options", 00:54:59.648 "params": { 00:54:59.648 "action_on_timeout": "none", 00:54:59.648 "timeout_us": 0, 00:54:59.648 "timeout_admin_us": 0, 00:54:59.648 "keep_alive_timeout_ms": 10000, 00:54:59.648 "arbitration_burst": 0, 00:54:59.648 "low_priority_weight": 0, 00:54:59.648 "medium_priority_weight": 0, 00:54:59.648 "high_priority_weight": 0, 00:54:59.648 "nvme_adminq_poll_period_us": 10000, 00:54:59.648 "nvme_ioq_poll_period_us": 0, 00:54:59.648 "io_queue_requests": 512, 00:54:59.648 "delay_cmd_submit": true, 00:54:59.648 "transport_retry_count": 4, 00:54:59.648 "bdev_retry_count": 3, 00:54:59.648 "transport_ack_timeout": 0, 00:54:59.648 "ctrlr_loss_timeout_sec": 0, 00:54:59.648 "reconnect_delay_sec": 0, 00:54:59.648 "fast_io_fail_timeout_sec": 0, 00:54:59.648 "disable_auto_failback": false, 00:54:59.648 "generate_uuids": false, 00:54:59.648 "transport_tos": 0, 00:54:59.648 "nvme_error_stat": false, 00:54:59.648 "rdma_srq_size": 0, 00:54:59.648 "io_path_stat": false, 
00:54:59.648 "allow_accel_sequence": false, 00:54:59.648 "rdma_max_cq_size": 0, 00:54:59.648 "rdma_cm_event_timeout_ms": 0, 00:54:59.648 "dhchap_digests": [ 00:54:59.648 "sha256", 00:54:59.648 "sha384", 00:54:59.648 "sha512" 00:54:59.648 ], 00:54:59.648 "dhchap_dhgroups": [ 00:54:59.648 "null", 00:54:59.648 "ffdhe2048", 00:54:59.648 "ffdhe3072", 00:54:59.648 "ffdhe4096", 00:54:59.648 "ffdhe6144", 00:54:59.648 "ffdhe8192" 00:54:59.648 ] 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_nvme_attach_controller", 00:54:59.648 "params": { 00:54:59.648 "name": "nvme0", 00:54:59.648 "trtype": "TCP", 00:54:59.648 "adrfam": "IPv4", 00:54:59.648 "traddr": "10.0.0.2", 00:54:59.648 "trsvcid": "4420", 00:54:59.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:59.648 "prchk_reftag": false, 00:54:59.648 "prchk_guard": false, 00:54:59.648 "ctrlr_loss_timeout_sec": 0, 00:54:59.648 "reconnect_delay_sec": 0, 00:54:59.648 "fast_io_fail_timeout_sec": 0, 00:54:59.648 "psk": "key0", 00:54:59.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:59.648 "hdgst": false, 00:54:59.648 "ddgst": false, 00:54:59.648 "multipath": "multipath" 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_nvme_set_hotplug", 00:54:59.648 "params": { 00:54:59.648 "period_us": 100000, 00:54:59.648 "enable": false 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_enable_histogram", 00:54:59.648 "params": { 00:54:59.648 "name": "nvme0n1", 00:54:59.648 "enable": true 00:54:59.648 } 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "method": "bdev_wait_for_examine" 00:54:59.648 } 00:54:59.648 ] 00:54:59.648 }, 00:54:59.648 { 00:54:59.648 "subsystem": "nbd", 00:54:59.648 "config": [] 00:54:59.648 } 00:54:59.648 ] 00:54:59.648 }' 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:59.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:59.648 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:54:59.648 [2024-12-09 11:06:00.668509] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:54:59.648 [2024-12-09 11:06:00.668578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434614 ] 00:54:59.648 [2024-12-09 11:06:00.750909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:59.648 [2024-12-09 11:06:00.797790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:59.906 [2024-12-09 11:06:00.958719] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:00.472 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:00.472 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:55:00.472 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:00.472 11:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:55:00.731 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:55:00.731 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:00.990 Running I/O for 1 seconds... 00:55:01.924 3784.00 IOPS, 14.78 MiB/s 00:55:01.924 Latency(us) 00:55:01.924 [2024-12-09T10:06:03.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:01.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:55:01.924 Verification LBA range: start 0x0 length 0x2000 00:55:01.924 nvme0n1 : 1.02 3830.98 14.96 0.00 0.00 33123.49 4957.94 40803.28 00:55:01.924 [2024-12-09T10:06:03.100Z] =================================================================================================================== 00:55:01.924 [2024-12-09T10:06:03.100Z] Total : 3830.98 14.96 0.00 0.00 33123.49 4957.94 40803.28 00:55:01.924 { 00:55:01.924 "results": [ 00:55:01.924 { 00:55:01.924 "job": "nvme0n1", 00:55:01.925 "core_mask": "0x2", 00:55:01.925 "workload": "verify", 00:55:01.925 "status": "finished", 00:55:01.925 "verify_range": { 00:55:01.925 "start": 0, 00:55:01.925 "length": 8192 00:55:01.925 }, 00:55:01.925 "queue_depth": 128, 00:55:01.925 "io_size": 4096, 00:55:01.925 "runtime": 1.021149, 00:55:01.925 "iops": 3830.978632892947, 00:55:01.925 "mibps": 14.964760284738075, 00:55:01.925 "io_failed": 0, 00:55:01.925 "io_timeout": 0, 00:55:01.925 "avg_latency_us": 33123.489243353775, 00:55:01.925 "min_latency_us": 4957.940869565217, 00:55:01.925 "max_latency_us": 40803.28347826087 00:55:01.925 } 00:55:01.925 ], 00:55:01.925 "core_count": 1 00:55:01.925 } 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:55:01.925 11:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:55:01.925 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:55:01.925 nvmf_trace.0 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2434614 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2434614 ']' 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2434614 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2434614 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434614' 00:55:02.183 killing process with pid 2434614 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2434614 00:55:02.183 Received shutdown signal, test time was about 1.000000 seconds 00:55:02.183 00:55:02.183 Latency(us) 00:55:02.183 [2024-12-09T10:06:03.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:02.183 [2024-12-09T10:06:03.359Z] =================================================================================================================== 00:55:02.183 [2024-12-09T10:06:03.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:02.183 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2434614 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:02.441 rmmod nvme_tcp 00:55:02.441 rmmod nvme_fabrics 00:55:02.441 rmmod nvme_keyring 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2434455 ']' 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2434455 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2434455 ']' 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2434455 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434455 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434455' 00:55:02.441 killing process with pid 2434455 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2434455 00:55:02.441 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2434455 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:02.700 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.jwCSCfxwAu /tmp/tmp.MetAXVHpZn /tmp/tmp.rociq3ElhS 00:55:05.236 00:55:05.236 real 1m27.575s 00:55:05.236 user 2m12.462s 00:55:05.236 sys 0m35.736s 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:55:05.236 ************************************ 00:55:05.236 END TEST nvmf_tls 00:55:05.236 ************************************ 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:55:05.236 11:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:55:05.236 ************************************ 00:55:05.236 START TEST nvmf_fips 00:55:05.236 ************************************ 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:55:05.236 * Looking for test storage... 00:55:05.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:55:05.236 
11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:55:05.236 11:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:05.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.236 --rc genhtml_branch_coverage=1 00:55:05.236 --rc genhtml_function_coverage=1 00:55:05.236 --rc genhtml_legend=1 00:55:05.236 --rc geninfo_all_blocks=1 00:55:05.236 --rc geninfo_unexecuted_blocks=1 00:55:05.236 00:55:05.236 ' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:05.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.236 --rc genhtml_branch_coverage=1 00:55:05.236 --rc genhtml_function_coverage=1 00:55:05.236 --rc genhtml_legend=1 00:55:05.236 --rc geninfo_all_blocks=1 00:55:05.236 --rc geninfo_unexecuted_blocks=1 00:55:05.236 00:55:05.236 ' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:05.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.236 --rc genhtml_branch_coverage=1 00:55:05.236 --rc genhtml_function_coverage=1 00:55:05.236 --rc genhtml_legend=1 00:55:05.236 --rc geninfo_all_blocks=1 00:55:05.236 --rc geninfo_unexecuted_blocks=1 00:55:05.236 00:55:05.236 ' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:05.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.236 --rc genhtml_branch_coverage=1 00:55:05.236 --rc genhtml_function_coverage=1 00:55:05.236 --rc genhtml_legend=1 00:55:05.236 --rc geninfo_all_blocks=1 00:55:05.236 --rc geninfo_unexecuted_blocks=1 00:55:05.236 00:55:05.236 ' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:05.236 11:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:55:05.236 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.237 11:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:05.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:55:05.237 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:55:05.496 Error setting digest 00:55:05.496 40A2A0546E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:55:05.496 40A2A0546E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:05.496 11:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:55:05.496 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:55:12.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:55:12.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:55:12.074 Found net devices under 0000:af:00.0: cvl_0_0 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:55:12.074 Found net devices under 0000:af:00.1: cvl_0_1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:12.074 11:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:12.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:12.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:55:12.074 00:55:12.074 --- 10.0.0.2 ping statistics --- 00:55:12.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:12.074 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:12.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:12.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:55:12.074 00:55:12.074 --- 10.0.0.1 ping statistics --- 00:55:12.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:12.074 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:12.074 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:12.075 11:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2438745 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2438745 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2438745 ']' 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:12.075 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:12.075 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:12.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:12.075 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:12.075 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:12.075 [2024-12-09 11:06:13.090018] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:55:12.075 [2024-12-09 11:06:13.090111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:12.075 [2024-12-09 11:06:13.192874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:12.075 [2024-12-09 11:06:13.237635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:12.075 [2024-12-09 11:06:13.237687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:12.075 [2024-12-09 11:06:13.237699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:12.075 [2024-12-09 11:06:13.237710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:12.075 [2024-12-09 11:06:13.237719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:55:12.075 [2024-12-09 11:06:13.238238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:55:12.333 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hZ3 00:55:12.334 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:55:12.334 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hZ3 00:55:12.334 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hZ3 00:55:12.334 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hZ3 00:55:12.334 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:55:12.592 [2024-12-09 11:06:13.662654] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:12.592 [2024-12-09 11:06:13.678632] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:55:12.592 [2024-12-09 11:06:13.678846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:12.592 malloc0 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2438939 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2438939 /var/tmp/bdevperf.sock 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2438939 ']' 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:12.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:12.592 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:12.851 [2024-12-09 11:06:13.817330] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:55:12.851 [2024-12-09 11:06:13.817416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438939 ] 00:55:12.851 [2024-12-09 11:06:13.914739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:12.851 [2024-12-09 11:06:13.956513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:13.787 11:06:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:13.787 11:06:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:55:13.787 11:06:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hZ3 00:55:14.046 11:06:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:55:14.305 [2024-12-09 11:06:15.255595] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:14.305 TLSTESTn1 00:55:14.305 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:14.305 Running I/O for 10 seconds... 
00:55:16.616 4127.00 IOPS, 16.12 MiB/s [2024-12-09T10:06:18.726Z] 4481.00 IOPS, 17.50 MiB/s [2024-12-09T10:06:19.660Z] 4592.67 IOPS, 17.94 MiB/s [2024-12-09T10:06:20.598Z] 4618.00 IOPS, 18.04 MiB/s [2024-12-09T10:06:21.535Z] 4715.60 IOPS, 18.42 MiB/s [2024-12-09T10:06:22.916Z] 4717.50 IOPS, 18.43 MiB/s [2024-12-09T10:06:23.485Z] 4639.86 IOPS, 18.12 MiB/s [2024-12-09T10:06:24.864Z] 4614.25 IOPS, 18.02 MiB/s [2024-12-09T10:06:25.802Z] 4570.78 IOPS, 17.85 MiB/s [2024-12-09T10:06:25.802Z] 4531.70 IOPS, 17.70 MiB/s 00:55:24.626 Latency(us) 00:55:24.626 [2024-12-09T10:06:25.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:24.626 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:55:24.626 Verification LBA range: start 0x0 length 0x2000 00:55:24.626 TLSTESTn1 : 10.02 4535.16 17.72 0.00 0.00 28181.51 5328.36 44678.46 00:55:24.626 [2024-12-09T10:06:25.802Z] =================================================================================================================== 00:55:24.626 [2024-12-09T10:06:25.802Z] Total : 4535.16 17.72 0.00 0.00 28181.51 5328.36 44678.46 00:55:24.626 { 00:55:24.626 "results": [ 00:55:24.626 { 00:55:24.626 "job": "TLSTESTn1", 00:55:24.626 "core_mask": "0x4", 00:55:24.626 "workload": "verify", 00:55:24.626 "status": "finished", 00:55:24.626 "verify_range": { 00:55:24.626 "start": 0, 00:55:24.626 "length": 8192 00:55:24.626 }, 00:55:24.626 "queue_depth": 128, 00:55:24.626 "io_size": 4096, 00:55:24.626 "runtime": 10.020382, 00:55:24.626 "iops": 4535.156444135563, 00:55:24.626 "mibps": 17.715454859904543, 00:55:24.626 "io_failed": 0, 00:55:24.626 "io_timeout": 0, 00:55:24.626 "avg_latency_us": 28181.507358430634, 00:55:24.626 "min_latency_us": 5328.361739130435, 00:55:24.626 "max_latency_us": 44678.455652173914 00:55:24.626 } 00:55:24.626 ], 00:55:24.626 "core_count": 1 00:55:24.626 } 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:55:24.626 
11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:55:24.626 nvmf_trace.0 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2438939 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2438939 ']' 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2438939 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438939 00:55:24.626 11:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438939' 00:55:24.626 killing process with pid 2438939 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2438939 00:55:24.626 Received shutdown signal, test time was about 10.000000 seconds 00:55:24.626 00:55:24.626 Latency(us) 00:55:24.626 [2024-12-09T10:06:25.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:24.626 [2024-12-09T10:06:25.802Z] =================================================================================================================== 00:55:24.626 [2024-12-09T10:06:25.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:24.626 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2438939 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:24.886 rmmod nvme_tcp 00:55:24.886 rmmod nvme_fabrics 00:55:24.886 rmmod nvme_keyring 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:55:24.886 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2438745 ']' 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2438745 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2438745 ']' 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2438745 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:24.887 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438745 00:55:24.887 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:24.887 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:24.887 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438745' 00:55:24.887 killing process with pid 2438745 00:55:24.887 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2438745 00:55:24.887 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2438745 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:25.147 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hZ3 00:55:27.689 00:55:27.689 real 0m22.350s 00:55:27.689 user 0m23.379s 00:55:27.689 sys 0m11.133s 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:55:27.689 ************************************ 00:55:27.689 END TEST nvmf_fips 00:55:27.689 ************************************ 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:55:27.689 ************************************ 00:55:27.689 START TEST nvmf_control_msg_list 00:55:27.689 ************************************ 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:55:27.689 * Looking for test storage... 00:55:27.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:55:27.689 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:55:27.690 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:27.690 --rc genhtml_branch_coverage=1 00:55:27.690 --rc genhtml_function_coverage=1 00:55:27.690 --rc genhtml_legend=1 00:55:27.690 --rc geninfo_all_blocks=1 00:55:27.690 --rc geninfo_unexecuted_blocks=1 00:55:27.690 00:55:27.690 ' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:27.690 --rc genhtml_branch_coverage=1 00:55:27.690 --rc genhtml_function_coverage=1 00:55:27.690 --rc genhtml_legend=1 00:55:27.690 --rc geninfo_all_blocks=1 00:55:27.690 --rc geninfo_unexecuted_blocks=1 00:55:27.690 00:55:27.690 ' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:27.690 --rc genhtml_branch_coverage=1 00:55:27.690 --rc genhtml_function_coverage=1 00:55:27.690 --rc genhtml_legend=1 00:55:27.690 --rc geninfo_all_blocks=1 00:55:27.690 --rc geninfo_unexecuted_blocks=1 00:55:27.690 00:55:27.690 ' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:55:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:27.690 --rc genhtml_branch_coverage=1 00:55:27.690 --rc genhtml_function_coverage=1 00:55:27.690 --rc genhtml_legend=1 00:55:27.690 --rc geninfo_all_blocks=1 00:55:27.690 --rc geninfo_unexecuted_blocks=1 00:55:27.690 00:55:27.690 ' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 
00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:27.690 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:27.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:27.690 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:27.690 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:27.691 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:55:27.691 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:55:34.272 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:34.272 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:34.273 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:55:34.273 Found 0000:af:00.0 (0x8086 - 0x159b) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:55:34.273 Found 0000:af:00.1 (0x8086 - 0x159b) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:34.273 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:55:34.273 Found net devices under 0000:af:00.0: cvl_0_0 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:34.273 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:55:34.273 Found net devices under 0000:af:00.1: cvl_0_1 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:34.273 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:34.274 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:34.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:34.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:55:34.274 00:55:34.274 --- 10.0.0.2 ping statistics --- 00:55:34.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:34.274 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:34.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:34.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:55:34.274 00:55:34.274 --- 10.0.0.1 ping statistics --- 00:55:34.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:34.274 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2443617 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2443617 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2443617 ']' 00:55:34.274 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:34.275 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:34.275 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:34.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
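The `waitforlisten 2443617` step above blocks until the freshly started `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock`. A hedged sketch of that wait loop — `wait_for_rpc_sock`, the retry count, and the poll interval are assumptions, not the autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper (hypothetical name wait_for_rpc_sock):
# poll until the UNIX domain socket appears, or give up after max_retries.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                         # timed out waiting for the app
}
```

The real helper additionally verifies the process is alive and that the RPC server answers, which this sketch omits.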
00:55:34.275 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:34.275 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.275 [2024-12-09 11:06:35.331219] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:55:34.275 [2024-12-09 11:06:35.331300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:34.535 [2024-12-09 11:06:35.462804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:34.535 [2024-12-09 11:06:35.515115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:34.535 [2024-12-09 11:06:35.515159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:34.535 [2024-12-09 11:06:35.515175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:34.535 [2024-12-09 11:06:35.515189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:34.535 [2024-12-09 11:06:35.515201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:55:34.535 [2024-12-09 11:06:35.515808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.535 [2024-12-09 11:06:35.687493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:34.535 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.795 Malloc0 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:34.795 [2024-12-09 11:06:35.738554] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2443695 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2443697 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2443698 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2443695 00:55:34.795 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:55:34.795 [2024-12-09 11:06:35.839569] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
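The `rpc_cmd` calls traced above (control_msg_list.sh@19-23) configure the target: a TCP transport constrained to a single control message and 768-byte in-capsule data, one subsystem, a 32 MiB malloc bdev as its namespace, and a listener on 10.0.0.2:4420. A sketch that replays that sequence as plain `rpc.py` invocations, echoed rather than executed so it can be inspected without a live target (the `scripts/rpc.py` path assumes an SPDK checkout; all values are copied from the log):

```shell
#!/usr/bin/env bash
# Emit the RPC sequence from the trace above instead of issuing it.
build_target_cmds() {
    local rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    printf '%s\n' \
        "$rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1" \
        "$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a" \
        "$rpc bdev_malloc_create -b Malloc0 32 512" \
        "$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0" \
        "$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420"
}
build_target_cmds
```

With `--control-msg-num 1`, the three concurrent `spdk_nvme_perf` clients below contend for a single control message, which is the behavior this test exercises.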
00:55:34.795 [2024-12-09 11:06:35.839834] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:55:34.795 [2024-12-09 11:06:35.840172] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:55:35.731 Initializing NVMe Controllers 00:55:35.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:55:35.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:55:35.731 Initialization complete. Launching workers. 00:55:35.731 ======================================================== 00:55:35.731 Latency(us) 00:55:35.731 Device Information : IOPS MiB/s Average min max 00:55:35.731 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4555.00 17.79 219.11 180.33 821.87 00:55:35.731 ======================================================== 00:55:35.731 Total : 4555.00 17.79 219.11 180.33 821.87 00:55:35.731 00:55:35.990 Initializing NVMe Controllers 00:55:35.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:55:35.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:55:35.991 Initialization complete. Launching workers. 
00:55:35.991 ======================================================== 00:55:35.991 Latency(us) 00:55:35.991 Device Information : IOPS MiB/s Average min max 00:55:35.991 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4592.00 17.94 217.34 161.71 517.98 00:55:35.991 ======================================================== 00:55:35.991 Total : 4592.00 17.94 217.34 161.71 517.98 00:55:35.991 00:55:35.991 Initializing NVMe Controllers 00:55:35.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:55:35.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:55:35.991 Initialization complete. Launching workers. 00:55:35.991 ======================================================== 00:55:35.991 Latency(us) 00:55:35.991 Device Information : IOPS MiB/s Average min max 00:55:35.991 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40972.76 40788.63 41873.87 00:55:35.991 ======================================================== 00:55:35.991 Total : 25.00 0.10 40972.76 40788.63 41873.87 00:55:35.991 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2443697 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2443698 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:35.991 11:06:37 
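The perf section above follows a simple fan-out/reap pattern: control_msg_list.sh launches three clients in the background (the real script runs spdk_nvme_perf pinned to core masks 0x2/0x4/0x8) and then `wait`s on each recorded PID. A hedged, runnable sketch of that pattern, with `true` standing in for spdk_nvme_perf so it works anywhere:

```shell
# Fan-out three background workers, then reap them with `wait`.
# `true` is a stand-in for spdk_nvme_perf; flags mirror the log above:
# -q 1: queue depth 1; -o 4096: 4 KiB I/O; -w randread; -t 1: run 1 second.
TRADDR='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
pids=()
for mask in 0x2 0x4 0x8; do
    true -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TRADDR" &
    pids+=($!)
done
status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=$?   # remember the first non-zero exit status
done
echo "reaped ${#pids[@]} perf workers, exit status $status"
```

Reaping each PID individually (rather than a bare `wait`) is what lets the script attribute a failure to a specific perf instance.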
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:35.991 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:35.991 rmmod nvme_tcp 00:55:35.991 rmmod nvme_fabrics 00:55:36.265 rmmod nvme_keyring 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2443617 ']' 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2443617 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2443617 ']' 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2443617 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2443617 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2443617' 00:55:36.265 killing process with pid 2443617 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2443617 00:55:36.265 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2443617 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:36.525 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:39.066 00:55:39.066 real 0m11.155s 00:55:39.066 user 0m7.279s 
00:55:39.066 sys 0m6.186s 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:55:39.066 ************************************ 00:55:39.066 END TEST nvmf_control_msg_list 00:55:39.066 ************************************ 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:55:39.066 ************************************ 00:55:39.066 START TEST nvmf_wait_for_buf 00:55:39.066 ************************************ 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:55:39.066 * Looking for test storage... 
00:55:39.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
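The trace above exercises the version comparison in scripts/common.sh (`lt 1.15 2` via `cmp_versions`): both versions are split on dots and compared field by field. A simplified sketch of that logic (the real helper also splits on `-` and handles a generic operator):

```shell
# ver_lt A B: return 0 (true) if version A is strictly less than B.
# Missing fields default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log takes the `lt 1.15 2` branch: the first field comparison (1 < 2) decides immediately, and the lcov-version-dependent LCOV_OPTS are chosen accordingly.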
export 'LCOV_OPTS= 00:55:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:39.066 --rc genhtml_branch_coverage=1 00:55:39.066 --rc genhtml_function_coverage=1 00:55:39.066 --rc genhtml_legend=1 00:55:39.066 --rc geninfo_all_blocks=1 00:55:39.066 --rc geninfo_unexecuted_blocks=1 00:55:39.066 00:55:39.066 ' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:39.066 --rc genhtml_branch_coverage=1 00:55:39.066 --rc genhtml_function_coverage=1 00:55:39.066 --rc genhtml_legend=1 00:55:39.066 --rc geninfo_all_blocks=1 00:55:39.066 --rc geninfo_unexecuted_blocks=1 00:55:39.066 00:55:39.066 ' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:39.066 --rc genhtml_branch_coverage=1 00:55:39.066 --rc genhtml_function_coverage=1 00:55:39.066 --rc genhtml_legend=1 00:55:39.066 --rc geninfo_all_blocks=1 00:55:39.066 --rc geninfo_unexecuted_blocks=1 00:55:39.066 00:55:39.066 ' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:39.066 --rc genhtml_branch_coverage=1 00:55:39.066 --rc genhtml_function_coverage=1 00:55:39.066 --rc genhtml_legend=1 00:55:39.066 --rc geninfo_all_blocks=1 00:55:39.066 --rc geninfo_unexecuted_blocks=1 00:55:39.066 00:55:39.066 ' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:39.066 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:39.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
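The `[: : integer expression expected` error captured above is bash's complaint when `[ ... -eq ... ]` receives an empty string: common.sh line 33 tests a variable that is unset in this run. A sketch of the failure mode and the usual guard; `HUGE_FLAG` is a hypothetical stand-in for whichever variable that test reads:

```shell
# An empty string fed to a numeric test errors out rather than matching.
HUGE_FLAG=''
if [ "$HUGE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "unguarded test matched"      # never reached: the test itself errors
fi
# Defaulting the expansion makes the numeric test well-formed.
if [ "${HUGE_FLAG:-0}" -eq 1 ]; then
    echo "guarded test matched"
else
    echo "guarded: empty treated as 0"
fi
```

The error is harmless here (the `if` simply takes the false branch), which is why the test run continues past it.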
gather_supported_nvmf_pci_devs 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:55:39.067 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:55:45.671 Found 0000:af:00.0 (0x8086 - 0x159b) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:55:45.671 Found 0000:af:00.1 (0x8086 - 0x159b) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:55:45.671 Found net devices under 0000:af:00.0: cvl_0_0 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:45.671 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:45.672 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:55:45.672 Found net devices under 0000:af:00.1: cvl_0_1 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:45.672 11:06:46 
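The "Found net devices under …" lines above come from common.sh mapping each supported PCI function to its kernel interface by globbing `/sys/bus/pci/devices/$pci/net/*`. A hedged sketch of that lookup, using a throwaway directory tree in place of sysfs so it runs anywhere:

```shell
# Map PCI addresses to net devices the way gather_supported_nvmf_pci_devs
# does: the directory names under $pci/net/ are the interface names.
# A mktemp tree stands in for /sys/bus/pci/devices.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/0000:af:00.0/net/cvl_0_0" "$SYSFS/0000:af:00.1/net/cvl_0_1"
found=()
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in "$SYSFS/$pci/net/"*; do
        found+=("${netdir##*/}")
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done
rm -rf "$SYSFS"
```

With two E810 functions discovered (device ID 0x159b, `ice` driver), `is_hw=yes` is set and the script proceeds to real-NIC TCP setup rather than veth emulation.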
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:45.672 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:45.932 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:45.932 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:45.932 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:45.932 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:45.932 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:45.932 11:06:47 
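The nvmf_tcp_init steps traced above isolate the target NIC in its own network namespace and address both ends of the link. A hedged dry-run sketch of that plumbing; the commands are printed rather than executed because the real steps need root and the physical cvl_0_* interfaces:

```shell
# Namespace plumbing as in nvmf/common.sh nvmf_tcp_init: target NIC moves
# into cvl_0_0_ns_spdk (10.0.0.2), initiator side stays in the root ns
# (10.0.0.1), and TCP/4420 is opened through iptables.
NS=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }   # change body to: "$@" to execute for real
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The bidirectional ping that follows in the log is the sanity check that this wiring worked before the target app is started inside the namespace.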
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:45.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:45.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:55:45.932 00:55:45.932 --- 10.0.0.2 ping statistics --- 00:55:45.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:45.932 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:45.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:45.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:55:45.932 00:55:45.932 --- 10.0.0.1 ping statistics --- 00:55:45.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:45.932 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2447207 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2447207 00:55:45.932 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2447207 ']' 00:55:45.933 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:45.933 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:45.933 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:45.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:45.933 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:45.933 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:46.192 [2024-12-09 11:06:47.114311] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:55:46.192 [2024-12-09 11:06:47.114367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:46.192 [2024-12-09 11:06:47.230028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:46.192 [2024-12-09 11:06:47.285163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:55:46.192 [2024-12-09 11:06:47.285207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:46.192 [2024-12-09 11:06:47.285223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:46.192 [2024-12-09 11:06:47.285237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:46.192 [2024-12-09 11:06:47.285249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:46.192 [2024-12-09 11:06:47.285860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 Malloc0 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:55:47.129 11:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 [2024-12-09 11:06:48.105594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:47.129 [2024-12-09 11:06:48.133808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.129 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:55:47.129 [2024-12-09 11:06:48.233744] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:55:48.509 Initializing NVMe Controllers 00:55:48.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:55:48.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:55:48.509 Initialization complete. Launching workers. 
00:55:48.509 ======================================================== 00:55:48.509 Latency(us) 00:55:48.509 Device Information : IOPS MiB/s Average min max 00:55:48.510 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.88 34940.84 7020.59 71831.42 00:55:48.510 ======================================================== 00:55:48.510 Total : 119.00 14.88 34940.84 7020.59 71831.42 00:55:48.510 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:55:48.769 11:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:48.769 rmmod nvme_tcp 00:55:48.769 rmmod nvme_fabrics 00:55:48.769 rmmod nvme_keyring 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2447207 ']' 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2447207 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2447207 ']' 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2447207 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:48.769 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2447207 00:55:49.028 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:49.028 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:49.028 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2447207' 00:55:49.028 killing process with pid 2447207 00:55:49.028 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@973 -- # kill 2447207 00:55:49.028 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2447207 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:49.028 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:55:49.288 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:49.288 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:49.288 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:49.288 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:49.288 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:51.195 00:55:51.195 real 0m12.581s 00:55:51.195 user 0m5.634s 00:55:51.195 sys 0m5.669s 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:55:51.195 ************************************ 00:55:51.195 END TEST nvmf_wait_for_buf 00:55:51.195 ************************************ 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # '[' 0 -eq 1 ']' 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # [[ phy == phy ]] 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # '[' tcp = tcp ']' 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # gather_supported_nvmf_pci_devs 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:55:51.195 11:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local 
-ga x722 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 
]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:55:59.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:55:59.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ 
tcp == rdma ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:55:59.323 Found net devices under 0000:af:00.0: cvl_0_0 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:55:59.323 Found net devices under 0000:af:00.1: cvl_0_1 00:55:59.323 11:06:59 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@59 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # (( 2 > 0 )) 00:55:59.323 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:55:59.324 ************************************ 00:55:59.324 START TEST nvmf_perf_adq 00:55:59.324 ************************************ 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:55:59.324 * Looking for test storage... 
00:55:59.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:55:59.324 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:59.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:59.324 --rc 
genhtml_branch_coverage=1 00:55:59.324 --rc genhtml_function_coverage=1 00:55:59.324 --rc genhtml_legend=1 00:55:59.324 --rc geninfo_all_blocks=1 00:55:59.324 --rc geninfo_unexecuted_blocks=1 00:55:59.324 00:55:59.324 ' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:59.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:59.324 --rc genhtml_branch_coverage=1 00:55:59.324 --rc genhtml_function_coverage=1 00:55:59.324 --rc genhtml_legend=1 00:55:59.324 --rc geninfo_all_blocks=1 00:55:59.324 --rc geninfo_unexecuted_blocks=1 00:55:59.324 00:55:59.324 ' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:59.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:59.324 --rc genhtml_branch_coverage=1 00:55:59.324 --rc genhtml_function_coverage=1 00:55:59.324 --rc genhtml_legend=1 00:55:59.324 --rc geninfo_all_blocks=1 00:55:59.324 --rc geninfo_unexecuted_blocks=1 00:55:59.324 00:55:59.324 ' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:59.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:59.324 --rc genhtml_branch_coverage=1 00:55:59.324 --rc genhtml_function_coverage=1 00:55:59.324 --rc genhtml_legend=1 00:55:59.324 --rc geninfo_all_blocks=1 00:55:59.324 --rc geninfo_unexecuted_blocks=1 00:55:59.324 00:55:59.324 ' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:59.324 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:55:59.324 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:59.324 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:59.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:59.324 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:59.325 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:59.325 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:55:59.325 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:55:59.325 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:56:05.914 11:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:56:05.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:56:05.914 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:05.914 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:56:05.915 Found net devices under 0000:af:00.0: cvl_0_0 00:56:05.915 11:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:56:05.915 Found net devices under 0000:af:00.1: cvl_0_1 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
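The device-discovery loop above (nvmf/common.sh@411 and @427) finds net interfaces behind each PCI address by globbing sysfs, then strips the directory prefix to get bare interface names like `cvl_0_0`. A self-contained sketch of that expansion, using a temp directory standing in for `/sys/bus/pci/devices`:

```shell
# Mimic the sysfs layout the trace globs: <pci-addr>/net/<ifname>.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0"

# Same two steps as common.sh@411 and common.sh@427 in the trace:
pci_net_devs=("$sysfs/0000:af:00.0/net/"*)       # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")          # strip to interface names

first=${pci_net_devs[0]}
echo "$first"   # prints "cvl_0_0"
rm -rf "$sysfs"
```

The `##*/` expansion removes the longest leading match of `*/` from each array element, which is why the log reports `Found net devices under 0000:af:00.0: cvl_0_0` rather than the full sysfs path.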
00:56:05.915 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:56:06.484 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:56:10.681 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:56:16.001 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:56:16.002 Found 0000:af:00.0 (0x8086 - 0x159b) 00:56:16.002 11:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:56:16.002 Found 0000:af:00.1 (0x8086 - 0x159b) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:56:16.002 Found net devices under 0000:af:00.0: cvl_0_0 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:56:16.002 Found net devices under 0000:af:00.1: cvl_0_1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:56:16.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:16.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:56:16.002 00:56:16.002 --- 10.0.0.2 ping statistics --- 00:56:16.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:16.002 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:56:16.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:16.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:56:16.002 00:56:16.002 --- 10.0.0.1 ping statistics --- 00:56:16.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:16.002 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:16.002 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2455101 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2455101 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2455101 ']' 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:16.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:16.003 [2024-12-09 11:07:16.755049] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
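The `waitforlisten 2455101` call above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A hedged sketch of that polling pattern (the function name `wait_for_path` and the temp-file stand-in are illustrative, not SPDK's actual implementation):

```shell
# Poll until a path exists, with a bounded retry count, the way
# waitforlisten polls for the target's RPC socket.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1   # gave up: the app never came up
}

sock=$(mktemp)                 # stands in for /var/tmp/spdk.sock
wait_for_path "$sock" 5 && echo "listening"
rm -f "$sock"
```

The real helper additionally verifies that the PID it was given (here 2455101) is still alive between retries, so a crashed target fails fast instead of burning the whole retry budget.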
00:56:16.003 [2024-12-09 11:07:16.755130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:56:16.003 [2024-12-09 11:07:16.890990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:56:16.003 [2024-12-09 11:07:16.946952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:56:16.003 [2024-12-09 11:07:16.947003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:56:16.003 [2024-12-09 11:07:16.947021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:56:16.003 [2024-12-09 11:07:16.947036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:56:16.003 [2024-12-09 11:07:16.947047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:56:16.003 [2024-12-09 11:07:16.948781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:56:16.003 [2024-12-09 11:07:16.948806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:56:16.003 [2024-12-09 11:07:16.948889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:56:16.003 [2024-12-09 11:07:16.948893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:56:16.003 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:56:16.003 11:07:17
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.003 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 [2024-12-09 11:07:17.193334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 Malloc1
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:16.263 [2024-12-09 11:07:17.261388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2455291
00:56:16.263 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:56:16.263 11:07:17
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:56:18.170 "tick_rate": 2300000000,
00:56:18.170 "poll_groups": [
00:56:18.170 {
00:56:18.170 "name": "nvmf_tgt_poll_group_000",
00:56:18.170 "admin_qpairs": 1,
00:56:18.170 "io_qpairs": 1,
00:56:18.170 "current_admin_qpairs": 1,
00:56:18.170 "current_io_qpairs": 1,
00:56:18.170 "pending_bdev_io": 0,
00:56:18.170 "completed_nvme_io": 19325,
00:56:18.170 "transports": [
00:56:18.170 {
00:56:18.170 "trtype": "TCP"
00:56:18.170 }
00:56:18.170 ]
00:56:18.170 },
00:56:18.170 {
00:56:18.170 "name": "nvmf_tgt_poll_group_001",
00:56:18.170 "admin_qpairs": 0,
00:56:18.170 "io_qpairs": 1,
00:56:18.170 "current_admin_qpairs": 0,
00:56:18.170 "current_io_qpairs": 1,
00:56:18.170 "pending_bdev_io": 0,
00:56:18.170 "completed_nvme_io": 19563,
00:56:18.170 "transports": [
00:56:18.170 {
00:56:18.170 "trtype": "TCP"
00:56:18.170 }
00:56:18.170 ]
00:56:18.170 },
00:56:18.170 {
00:56:18.170 "name": "nvmf_tgt_poll_group_002",
00:56:18.170 "admin_qpairs": 0,
00:56:18.170 "io_qpairs": 1,
00:56:18.170 "current_admin_qpairs": 0,
00:56:18.170 "current_io_qpairs": 1,
00:56:18.170 "pending_bdev_io": 0,
00:56:18.170 "completed_nvme_io": 18871,
00:56:18.170 "transports": [
00:56:18.170 {
00:56:18.170 "trtype": "TCP"
00:56:18.170 }
00:56:18.170 ]
00:56:18.170 },
00:56:18.170 {
00:56:18.170 "name": "nvmf_tgt_poll_group_003",
00:56:18.170 "admin_qpairs": 0,
00:56:18.170 "io_qpairs": 1,
00:56:18.170 "current_admin_qpairs": 0,
00:56:18.170 "current_io_qpairs": 1,
00:56:18.170 "pending_bdev_io": 0,
00:56:18.170 "completed_nvme_io": 15540,
00:56:18.170 "transports": [
00:56:18.170 {
00:56:18.170 "trtype": "TCP"
00:56:18.170 }
00:56:18.170 ]
00:56:18.170 }
00:56:18.170 ]
00:56:18.170 }'
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:56:18.170 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:56:18.429 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:56:18.429 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:56:18.429 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2455291
00:56:26.557 Initializing NVMe Controllers
00:56:26.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:56:26.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:56:26.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:56:26.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:56:26.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:56:26.557 Initialization complete. Launching workers.
00:56:26.557 ========================================================
00:56:26.557 Latency(us)
00:56:26.557 Device Information : IOPS MiB/s Average min max
00:56:26.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8204.40 32.05 7802.59 2617.76 12962.07
00:56:26.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10421.59 40.71 6142.17 1683.48 9916.63
00:56:26.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10177.79 39.76 6288.33 2347.52 9784.63
00:56:26.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10165.29 39.71 6295.65 2769.22 13094.46
00:56:26.557 ========================================================
00:56:26.558 Total : 38969.08 152.22 6569.96 1683.48 13094.46
00:56:26.558
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:56:26.558 rmmod nvme_tcp
00:56:26.558 rmmod nvme_fabrics
00:56:26.558 rmmod nvme_keyring
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:56:26.558 11:07:27
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2455101 ']'
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2455101
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2455101 ']'
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2455101
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455101
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455101'
00:56:26.558 killing process with pid 2455101
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2455101
00:56:26.558 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2455101
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:56:26.818 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:56:29.354 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:56:29.354 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:56:29.354 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:56:29.354 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:56:29.923 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:56:31.843 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq
-- nvmf/common.sh@438 -- # local -g is_hw=no
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:56:37.130 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:56:37.131 Found 0000:af:00.0 (0x8086 - 0x159b)
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:56:37.131 Found net devices under 0000:af:00.0: cvl_0_0
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:56:37.131 Found net devices under 0000:af:00.1: cvl_0_1
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:56:37.131 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1
up
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:56:37.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:56:37.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms
00:56:37.131
00:56:37.131 --- 10.0.0.2 ping statistics ---
00:56:37.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:56:37.131 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:56:37.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:56:37.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:56:37.131
00:56:37.131 --- 10.0.0.1 ping statistics ---
00:56:37.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:56:37.131 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:56:37.131 net.core.busy_poll = 1
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:56:37.131 net.core.busy_read = 1
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:56:37.131 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2458331
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2458331
00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
--wait-for-rpc 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2458331 ']' 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:37.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:37.392 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.652 [2024-12-09 11:07:38.626462] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:56:37.652 [2024-12-09 11:07:38.626547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:37.652 [2024-12-09 11:07:38.762114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:37.652 [2024-12-09 11:07:38.815854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:37.652 [2024-12-09 11:07:38.815907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:37.652 [2024-12-09 11:07:38.815923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:37.652 [2024-12-09 11:07:38.815937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:56:37.652 [2024-12-09 11:07:38.815949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:37.652 [2024-12-09 11:07:38.817883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:37.652 [2024-12-09 11:07:38.817976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:37.652 [2024-12-09 11:07:38.818066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:37.652 [2024-12-09 11:07:38.818071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
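The ADQ driver configuration captured above (perf_adq.sh@22 through @38) boils down to a handful of host commands. A minimal standalone sketch, assuming a hypothetical E810-style port named `eth0` (the test uses `cvl_0_0` inside a network namespace) and the NVMe/TCP target at 10.0.0.2:4420:

```shell
# Sketch of the ADQ setup shown in the log; IFACE and the target
# address are illustrative assumptions, and these commands need root
# plus ADQ-capable hardware.
IFACE=eth0   # hypothetical interface name

# Enable hardware TC offload and disable packet-inspect optimization.
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling keeps application threads spinning on their sockets.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 on queues 0-1, TC1 on queues 2-3.
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst port 4420) into TC1, offloaded in hardware.
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `skip_sw hw_tc 1` pair is what makes this ADQ rather than ordinary flower steering: matching flows are classified purely in the NIC and pinned to the dedicated traffic class.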
00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.912 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.912 [2024-12-09 11:07:39.045271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:56:37.912 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.913 11:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:37.913 Malloc1 00:56:37.913 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:37.913 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:56:37.913 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:37.913 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:38.172 [2024-12-09 11:07:39.109178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2458469 
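Shortly after this the test fetches `nvmf_get_stats` and counts poll groups with no active IO qpairs (perf_adq.sh@107-108). The jq filter emits one line per idle group (the `length` of each selected object), so `wc -l` yields the idle-group count. It can be exercised against a trimmed-down stats document (field names as in the log, values made up):

```shell
# Abridged nvmf_get_stats output; only the fields the filter touches.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 2 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 2 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
  ]
}'

# Same pipeline as perf_adq.sh@108: one output line per idle group.
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
echo "$count"   # → 2: two poll groups carry no IO, matching the log
```

With ADQ steering all connections onto cores 0-1, the test expects at least two of the four poll groups to stay idle, which is exactly what `count=2` followed by `[[ 2 -lt 2 ]]` being false verifies.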
00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:56:38.172 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:56:40.083 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:56:40.083 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:40.083 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:40.083 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:40.083 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:56:40.083 "tick_rate": 2300000000, 00:56:40.083 "poll_groups": [ 00:56:40.083 { 00:56:40.084 "name": "nvmf_tgt_poll_group_000", 00:56:40.084 "admin_qpairs": 1, 00:56:40.084 "io_qpairs": 2, 00:56:40.084 "current_admin_qpairs": 1, 00:56:40.084 "current_io_qpairs": 2, 00:56:40.084 "pending_bdev_io": 0, 00:56:40.084 "completed_nvme_io": 27896, 00:56:40.084 "transports": [ 00:56:40.084 { 00:56:40.084 "trtype": "TCP" 00:56:40.084 } 00:56:40.084 ] 00:56:40.084 }, 00:56:40.084 { 00:56:40.084 "name": "nvmf_tgt_poll_group_001", 00:56:40.084 "admin_qpairs": 0, 00:56:40.084 "io_qpairs": 2, 00:56:40.084 "current_admin_qpairs": 0, 00:56:40.084 "current_io_qpairs": 2, 00:56:40.084 "pending_bdev_io": 0, 00:56:40.084 "completed_nvme_io": 29526, 00:56:40.084 "transports": [ 00:56:40.084 { 00:56:40.084 "trtype": "TCP" 00:56:40.084 } 00:56:40.084 ] 00:56:40.084 }, 00:56:40.084 { 00:56:40.084 "name": "nvmf_tgt_poll_group_002", 00:56:40.084 "admin_qpairs": 0, 00:56:40.084 "io_qpairs": 0, 00:56:40.084 "current_admin_qpairs": 0, 
00:56:40.084 "current_io_qpairs": 0, 00:56:40.084 "pending_bdev_io": 0, 00:56:40.084 "completed_nvme_io": 0, 00:56:40.084 "transports": [ 00:56:40.084 { 00:56:40.084 "trtype": "TCP" 00:56:40.084 } 00:56:40.084 ] 00:56:40.084 }, 00:56:40.084 { 00:56:40.084 "name": "nvmf_tgt_poll_group_003", 00:56:40.084 "admin_qpairs": 0, 00:56:40.084 "io_qpairs": 0, 00:56:40.084 "current_admin_qpairs": 0, 00:56:40.084 "current_io_qpairs": 0, 00:56:40.084 "pending_bdev_io": 0, 00:56:40.084 "completed_nvme_io": 0, 00:56:40.084 "transports": [ 00:56:40.084 { 00:56:40.084 "trtype": "TCP" 00:56:40.084 } 00:56:40.084 ] 00:56:40.084 } 00:56:40.084 ] 00:56:40.084 }' 00:56:40.084 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:56:40.084 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:56:40.084 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:56:40.084 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:56:40.084 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2458469 00:56:48.210 Initializing NVMe Controllers 00:56:48.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:56:48.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:56:48.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:56:48.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:56:48.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:56:48.210 Initialization complete. Launching workers. 
00:56:48.210 ======================================================== 00:56:48.210 Latency(us) 00:56:48.210 Device Information : IOPS MiB/s Average min max 00:56:48.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7779.07 30.39 8226.31 1131.49 54087.86 00:56:48.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7252.07 28.33 8823.40 1329.94 53068.29 00:56:48.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7477.17 29.21 8560.38 1355.21 53627.27 00:56:48.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7327.77 28.62 8734.58 1349.71 52598.14 00:56:48.210 ======================================================== 00:56:48.210 Total : 29836.08 116.55 8579.99 1131.49 54087.86 00:56:48.210 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:48.210 rmmod nvme_tcp 00:56:48.210 rmmod nvme_fabrics 00:56:48.210 rmmod nvme_keyring 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:56:48.210 11:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2458331 ']' 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2458331 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2458331 ']' 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2458331 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:48.210 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458331 00:56:48.470 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:48.470 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:48.470 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458331' 00:56:48.470 killing process with pid 2458331 00:56:48.470 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2458331 00:56:48.470 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2458331 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:56:48.730 
11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:48.730 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:50.637 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:56:50.637 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:56:50.637 00:56:50.637 real 0m52.625s 00:56:50.637 user 2m42.995s 00:56:50.637 sys 0m16.100s 00:56:50.637 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:50.637 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:56:50.637 ************************************ 00:56:50.637 END TEST nvmf_perf_adq 00:56:50.637 ************************************ 00:56:50.896 11:07:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:56:50.896 11:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:50.896 11:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:50.896 11:07:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:56:50.896 ************************************ 00:56:50.896 START TEST nvmf_shutdown 00:56:50.896 ************************************ 00:56:50.896 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:56:50.896 * Looking for test storage... 00:56:50.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:56:50.896 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:50.896 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:56:50.896 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:56:51.156 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:51.156 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:51.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:51.157 --rc genhtml_branch_coverage=1 00:56:51.157 --rc genhtml_function_coverage=1 00:56:51.157 --rc genhtml_legend=1 00:56:51.157 --rc geninfo_all_blocks=1 00:56:51.157 --rc geninfo_unexecuted_blocks=1 00:56:51.157 00:56:51.157 ' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:51.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:51.157 --rc genhtml_branch_coverage=1 00:56:51.157 --rc genhtml_function_coverage=1 00:56:51.157 --rc genhtml_legend=1 00:56:51.157 --rc geninfo_all_blocks=1 00:56:51.157 --rc geninfo_unexecuted_blocks=1 00:56:51.157 00:56:51.157 ' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:51.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:51.157 --rc genhtml_branch_coverage=1 00:56:51.157 --rc genhtml_function_coverage=1 00:56:51.157 --rc genhtml_legend=1 00:56:51.157 --rc geninfo_all_blocks=1 00:56:51.157 --rc geninfo_unexecuted_blocks=1 00:56:51.157 00:56:51.157 ' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:51.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:51.157 --rc genhtml_branch_coverage=1 00:56:51.157 --rc genhtml_function_coverage=1 00:56:51.157 --rc genhtml_legend=1 00:56:51.157 --rc geninfo_all_blocks=1 00:56:51.157 --rc geninfo_unexecuted_blocks=1 00:56:51.157 00:56:51.157 ' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:51.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:56:51.157 ************************************ 00:56:51.157 START TEST nvmf_shutdown_tc1 00:56:51.157 ************************************ 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:56:51.157 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:56:59.302 11:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:59.302 11:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:56:59.302 Found 0000:af:00.0 (0x8086 - 0x159b) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:59.302 11:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:56:59.302 Found 0000:af:00.1 (0x8086 - 0x159b) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:56:59.302 Found net devices under 0000:af:00.0: cvl_0_0 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:59.302 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:56:59.303 Found net devices under 0000:af:00.1: cvl_0_1 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:56:59.303 11:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:56:59.303 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:56:59.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:59.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:56:59.303 00:56:59.303 --- 10.0.0.2 ping statistics --- 00:56:59.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:59.303 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:56:59.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:59.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:56:59.303 00:56:59.303 --- 10.0.0.1 ping statistics --- 00:56:59.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:59.303 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2463067 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2463067 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2463067 ']' 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:56:59.303 [2024-12-09 11:07:59.303979] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:56:59.303 [2024-12-09 11:07:59.304052] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:59.303 [2024-12-09 11:07:59.407984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:59.303 [2024-12-09 11:07:59.452484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:59.303 [2024-12-09 11:07:59.452523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:59.303 [2024-12-09 11:07:59.452534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:59.303 [2024-12-09 11:07:59.452544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:59.303 [2024-12-09 11:07:59.452553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:56:59.303 [2024-12-09 11:07:59.454021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:59.303 [2024-12-09 11:07:59.454114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:59.303 [2024-12-09 11:07:59.454203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:56:59.303 [2024-12-09 11:07:59.454206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.303 [2024-12-09 11:07:59.617464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:59.303 11:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.303 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:59.304 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.304 Malloc1 00:56:59.304 [2024-12-09 11:07:59.745550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:59.304 Malloc2 00:56:59.304 Malloc3 00:56:59.304 Malloc4 00:56:59.304 Malloc5 00:56:59.304 Malloc6 00:56:59.304 Malloc7 00:56:59.304 Malloc8 00:56:59.304 Malloc9 
00:56:59.304 Malloc10 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2463210 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2463210 /var/tmp/bdevperf.sock 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2463210 ']' 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:56:59.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": 
${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.304 } 00:56:59.304 EOF 00:56:59.304 )") 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.304 [2024-12-09 11:08:00.258693] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:56:59.304 [2024-12-09 11:08:00.258774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.304 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.304 { 00:56:59.304 "params": { 00:56:59.304 "name": "Nvme$subsystem", 00:56:59.304 "trtype": "$TEST_TRANSPORT", 00:56:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.304 "adrfam": "ipv4", 00:56:59.304 "trsvcid": "$NVMF_PORT", 00:56:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.304 "hdgst": ${hdgst:-false}, 00:56:59.304 "ddgst": ${ddgst:-false} 00:56:59.304 }, 00:56:59.304 "method": "bdev_nvme_attach_controller" 00:56:59.305 } 00:56:59.305 EOF 00:56:59.305 )") 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.305 { 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme$subsystem", 00:56:59.305 "trtype": 
"$TEST_TRANSPORT", 00:56:59.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "$NVMF_PORT", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.305 "hdgst": ${hdgst:-false}, 00:56:59.305 "ddgst": ${ddgst:-false} 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 } 00:56:59.305 EOF 00:56:59.305 )") 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.305 { 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme$subsystem", 00:56:59.305 "trtype": "$TEST_TRANSPORT", 00:56:59.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "$NVMF_PORT", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.305 "hdgst": ${hdgst:-false}, 00:56:59.305 "ddgst": ${ddgst:-false} 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 } 00:56:59.305 EOF 00:56:59.305 )") 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:59.305 { 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme$subsystem", 00:56:59.305 "trtype": "$TEST_TRANSPORT", 00:56:59.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": 
"$NVMF_PORT", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:59.305 "hdgst": ${hdgst:-false}, 00:56:59.305 "ddgst": ${ddgst:-false} 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 } 00:56:59.305 EOF 00:56:59.305 )") 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:56:59.305 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme1", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme2", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme3", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:56:59.305 "hdgst": false, 00:56:59.305 
"ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme4", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme5", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme6", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme7", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme8", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 
"trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme9", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 },{ 00:56:59.305 "params": { 00:56:59.305 "name": "Nvme10", 00:56:59.305 "trtype": "tcp", 00:56:59.305 "traddr": "10.0.0.2", 00:56:59.305 "adrfam": "ipv4", 00:56:59.305 "trsvcid": "4420", 00:56:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:56:59.305 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:56:59.305 "hdgst": false, 00:56:59.305 "ddgst": false 00:56:59.305 }, 00:56:59.305 "method": "bdev_nvme_attach_controller" 00:56:59.305 }' 00:56:59.305 [2024-12-09 11:08:00.395744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:59.305 [2024-12-09 11:08:00.447249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2463210 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:57:01.213 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:57:02.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2463210 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:57:02.152 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2463067 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.153 { 00:57:02.153 "params": { 00:57:02.153 "name": "Nvme$subsystem", 00:57:02.153 "trtype": "$TEST_TRANSPORT", 00:57:02.153 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:57:02.153 "adrfam": "ipv4", 00:57:02.153 "trsvcid": "$NVMF_PORT", 00:57:02.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.153 "hdgst": ${hdgst:-false}, 00:57:02.153 "ddgst": ${ddgst:-false} 00:57:02.153 }, 00:57:02.153 "method": "bdev_nvme_attach_controller" 00:57:02.153 } 00:57:02.153 EOF 00:57:02.153 )") 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.153 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": ${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": ${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": ${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": 
${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": ${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 [2024-12-09 11:08:03.364341] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:02.413 [2024-12-09 11:08:03.364402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463676 ] 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.413 "params": { 00:57:02.413 "name": "Nvme$subsystem", 00:57:02.413 "trtype": "$TEST_TRANSPORT", 00:57:02.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.413 "adrfam": "ipv4", 00:57:02.413 "trsvcid": "$NVMF_PORT", 00:57:02.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.413 "hdgst": ${hdgst:-false}, 00:57:02.413 "ddgst": ${ddgst:-false} 00:57:02.413 }, 00:57:02.413 "method": "bdev_nvme_attach_controller" 00:57:02.413 } 00:57:02.413 EOF 00:57:02.413 )") 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.413 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.413 { 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme$subsystem", 00:57:02.414 "trtype": "$TEST_TRANSPORT", 00:57:02.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "$NVMF_PORT", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.414 "hdgst": 
${hdgst:-false}, 00:57:02.414 "ddgst": ${ddgst:-false} 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 } 00:57:02.414 EOF 00:57:02.414 )") 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.414 { 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme$subsystem", 00:57:02.414 "trtype": "$TEST_TRANSPORT", 00:57:02.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "$NVMF_PORT", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.414 "hdgst": ${hdgst:-false}, 00:57:02.414 "ddgst": ${ddgst:-false} 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 } 00:57:02.414 EOF 00:57:02.414 )") 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:02.414 { 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme$subsystem", 00:57:02.414 "trtype": "$TEST_TRANSPORT", 00:57:02.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "$NVMF_PORT", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:02.414 "hdgst": ${hdgst:-false}, 00:57:02.414 "ddgst": ${ddgst:-false} 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 
00:57:02.414 } 00:57:02.414 EOF 00:57:02.414 )") 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:57:02.414 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme1", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme2", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme3", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme4", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 
00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme5", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme6", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme7", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme8", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": 
{ 00:57:02.414 "name": "Nvme9", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 },{ 00:57:02.414 "params": { 00:57:02.414 "name": "Nvme10", 00:57:02.414 "trtype": "tcp", 00:57:02.414 "traddr": "10.0.0.2", 00:57:02.414 "adrfam": "ipv4", 00:57:02.414 "trsvcid": "4420", 00:57:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:57:02.414 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:57:02.414 "hdgst": false, 00:57:02.414 "ddgst": false 00:57:02.414 }, 00:57:02.414 "method": "bdev_nvme_attach_controller" 00:57:02.414 }' 00:57:02.414 [2024-12-09 11:08:03.477928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:02.414 [2024-12-09 11:08:03.529102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:03.795 Running I/O for 1 seconds... 
00:57:05.006 1484.00 IOPS, 92.75 MiB/s 00:57:05.006 Latency(us) 00:57:05.006 [2024-12-09T10:08:06.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:05.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme1n1 : 1.18 163.37 10.21 0.00 0.00 387355.46 21199.47 317308.22 00:57:05.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme2n1 : 1.16 165.15 10.32 0.00 0.00 375531.67 17096.35 306366.55 00:57:05.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme3n1 : 1.22 209.79 13.11 0.00 0.00 290069.82 22339.23 304542.94 00:57:05.006 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme4n1 : 1.12 171.76 10.74 0.00 0.00 345401.14 19147.91 311837.38 00:57:05.006 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme5n1 : 1.23 207.69 12.98 0.00 0.00 281754.94 24276.81 320955.44 00:57:05.006 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme6n1 : 1.21 211.02 13.19 0.00 0.00 271053.91 22681.15 277188.79 00:57:05.006 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme7n1 : 1.24 207.16 12.95 0.00 0.00 271202.39 14303.94 284483.23 00:57:05.006 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme8n1 : 1.22 209.13 13.07 0.00 0.00 262371.95 14930.81 317308.22 
00:57:05.006 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme9n1 : 1.23 218.50 13.66 0.00 0.00 243486.12 6838.54 297248.50 00:57:05.006 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:05.006 Verification LBA range: start 0x0 length 0x400 00:57:05.006 Nvme10n1 : 1.24 206.39 12.90 0.00 0.00 255370.69 16982.37 324602.66 00:57:05.006 [2024-12-09T10:08:06.182Z] =================================================================================================================== 00:57:05.006 [2024-12-09T10:08:06.182Z] Total : 1969.96 123.12 0.00 0.00 292349.79 6838.54 324602.66 00:57:05.265 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:57:05.265 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:57:05.266 11:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:05.266 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:05.266 rmmod nvme_tcp 00:57:05.266 rmmod nvme_fabrics 00:57:05.266 rmmod nvme_keyring 00:57:05.525 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:05.525 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:57:05.525 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:57:05.525 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2463067 ']' 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2463067 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2463067 ']' 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2463067 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463067 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:05.526 11:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463067' 00:57:05.526 killing process with pid 2463067 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2463067 00:57:05.526 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2463067 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:06.095 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:08.003 00:57:08.003 real 0m16.878s 00:57:08.003 user 0m36.680s 00:57:08.003 sys 0m6.836s 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:57:08.003 ************************************ 00:57:08.003 END TEST nvmf_shutdown_tc1 00:57:08.003 ************************************ 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:08.003 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:08.264 ************************************ 00:57:08.264 START TEST nvmf_shutdown_tc2 00:57:08.264 ************************************ 00:57:08.264 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:57:08.264 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:57:08.264 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:57:08.264 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:57:08.265 Found 0000:af:00.0 (0x8086 - 0x159b) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:57:08.265 Found 0000:af:00.1 (0x8086 - 0x159b) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:57:08.265 Found net devices under 0000:af:00.0: cvl_0_0 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:08.265 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:08.265 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:57:08.266 Found net devices under 0000:af:00.1: cvl_0_1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:08.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:08.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:57:08.266 00:57:08.266 --- 10.0.0.2 ping statistics --- 00:57:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:08.266 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:08.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:57:08.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:57:08.266 00:57:08.266 --- 10.0.0.1 ping statistics --- 00:57:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:08.266 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:08.266 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:08.526 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.527 
11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2464501 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2464501 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2464501 ']' 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:08.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:08.527 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.527 [2024-12-09 11:08:09.555047] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:08.527 [2024-12-09 11:08:09.555128] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:08.527 [2024-12-09 11:08:09.658585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:08.787 [2024-12-09 11:08:09.705099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:08.787 [2024-12-09 11:08:09.705135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:08.787 [2024-12-09 11:08:09.705146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:08.787 [2024-12-09 11:08:09.705156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:08.787 [2024-12-09 11:08:09.705164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:57:08.787 [2024-12-09 11:08:09.706689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:08.787 [2024-12-09 11:08:09.706710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:08.787 [2024-12-09 11:08:09.706805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:57:08.787 [2024-12-09 11:08:09.706807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.787 [2024-12-09 11:08:09.870369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:08.787 11:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:08.787 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:08.788 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:09.048 Malloc1 00:57:09.048 [2024-12-09 11:08:10.002823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:09.048 Malloc2 00:57:09.048 Malloc3 00:57:09.048 Malloc4 00:57:09.048 Malloc5 00:57:09.048 Malloc6 00:57:09.308 Malloc7 00:57:09.308 Malloc8 00:57:09.308 Malloc9 
00:57:09.308 Malloc10 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2464722 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2464722 /var/tmp/bdevperf.sock 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2464722 ']' 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:57:09.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.308 { 00:57:09.308 "params": { 00:57:09.308 "name": "Nvme$subsystem", 00:57:09.308 "trtype": "$TEST_TRANSPORT", 00:57:09.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.308 "adrfam": "ipv4", 00:57:09.308 "trsvcid": "$NVMF_PORT", 00:57:09.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.308 "hdgst": ${hdgst:-false}, 00:57:09.308 "ddgst": ${ddgst:-false} 00:57:09.308 }, 00:57:09.308 "method": "bdev_nvme_attach_controller" 00:57:09.308 } 00:57:09.308 EOF 00:57:09.308 )") 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.308 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": 
${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 [2024-12-09 11:08:10.528529] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:57:09.569 [2024-12-09 11:08:10.528611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464722 ] 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 
"trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 "trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:09.569 { 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme$subsystem", 00:57:09.569 "trtype": "$TEST_TRANSPORT", 00:57:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:09.569 "adrfam": "ipv4", 00:57:09.569 
"trsvcid": "$NVMF_PORT", 00:57:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:09.569 "hdgst": ${hdgst:-false}, 00:57:09.569 "ddgst": ${ddgst:-false} 00:57:09.569 }, 00:57:09.569 "method": "bdev_nvme_attach_controller" 00:57:09.569 } 00:57:09.569 EOF 00:57:09.569 )") 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:57:09.569 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:57:09.569 "params": { 00:57:09.569 "name": "Nvme1", 00:57:09.569 "trtype": "tcp", 00:57:09.569 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme2", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme3", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:57:09.570 "hdgst": false, 
00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme4", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme5", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme6", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme7", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme8", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 
00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme9", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 },{ 00:57:09.570 "params": { 00:57:09.570 "name": "Nvme10", 00:57:09.570 "trtype": "tcp", 00:57:09.570 "traddr": "10.0.0.2", 00:57:09.570 "adrfam": "ipv4", 00:57:09.570 "trsvcid": "4420", 00:57:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:57:09.570 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:57:09.570 "hdgst": false, 00:57:09.570 "ddgst": false 00:57:09.570 }, 00:57:09.570 "method": "bdev_nvme_attach_controller" 00:57:09.570 }' 00:57:09.570 [2024-12-09 11:08:10.660244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:09.570 [2024-12-09 11:08:10.712660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:10.951 Running I/O for 10 seconds... 
00:57:10.951 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:10.951 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:57:10.951 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:57:10.951 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:10.951 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:57:11.211 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:57:11.485 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2464722 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2464722 
']' 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2464722 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:11.748 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464722 00:57:12.008 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:12.008 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:12.008 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464722' 00:57:12.008 killing process with pid 2464722 00:57:12.008 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2464722 00:57:12.008 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2464722 00:57:12.008 1430.00 IOPS, 89.38 MiB/s [2024-12-09T10:08:13.184Z] Received shutdown signal, test time was about 1.115652 seconds 00:57:12.008 00:57:12.008 Latency(us) 00:57:12.008 [2024-12-09T10:08:13.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:12.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme1n1 : 1.08 177.02 11.06 0.00 0.00 357106.20 16868.40 302719.33 00:57:12.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme2n1 : 1.09 175.50 
10.97 0.00 0.00 352668.27 18122.13 311837.38 00:57:12.008 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme3n1 : 1.11 230.29 14.39 0.00 0.00 263060.26 38523.77 284483.23 00:57:12.008 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme4n1 : 1.11 229.66 14.35 0.00 0.00 258111.67 18122.13 315484.61 00:57:12.008 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme5n1 : 1.11 231.00 14.44 0.00 0.00 250709.04 22111.28 297248.50 00:57:12.008 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme6n1 : 1.08 192.37 12.02 0.00 0.00 289214.31 14816.83 324602.66 00:57:12.008 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme7n1 : 1.08 178.49 11.16 0.00 0.00 308441.27 22909.11 271717.95 00:57:12.008 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme8n1 : 1.07 179.46 11.22 0.00 0.00 299280.47 31229.33 293601.28 00:57:12.008 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme9n1 : 1.10 174.42 10.90 0.00 0.00 302082.97 41943.04 295424.89 00:57:12.008 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:57:12.008 Verification LBA range: start 0x0 length 0x400 00:57:12.008 Nvme10n1 : 1.10 173.80 10.86 0.00 0.00 295966.87 19831.76 320955.44 00:57:12.008 [2024-12-09T10:08:13.184Z] 
=================================================================================================================== 00:57:12.008 [2024-12-09T10:08:13.184Z] Total : 1942.01 121.38 0.00 0.00 293958.15 14816.83 324602.66 00:57:12.268 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2464501 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:13.206 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:13.465 
rmmod nvme_tcp 00:57:13.465 rmmod nvme_fabrics 00:57:13.465 rmmod nvme_keyring 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2464501 ']' 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2464501 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2464501 ']' 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2464501 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464501 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464501' 00:57:13.465 killing process with pid 2464501 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 
-- # kill 2464501 00:57:13.465 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2464501 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:14.034 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:15.941 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:15.941 00:57:15.941 real 0m7.851s 00:57:15.941 user 0m23.546s 00:57:15.941 sys 0m1.713s 00:57:15.941 11:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:15.942 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:57:15.942 ************************************ 00:57:15.942 END TEST nvmf_shutdown_tc2 00:57:15.942 ************************************ 00:57:15.942 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:57:15.942 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:57:15.942 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:15.942 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:16.202 ************************************ 00:57:16.202 START TEST nvmf_shutdown_tc3 00:57:16.202 ************************************ 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:16.202 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:57:16.203 Found 0000:af:00.0 (0x8086 - 0x159b) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:57:16.203 Found 0000:af:00.1 (0x8086 - 0x159b) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:16.203 11:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:57:16.203 Found net devices under 0000:af:00.0: cvl_0_0 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:16.203 11:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:57:16.203 Found net devices under 0000:af:00.1: cvl_0_1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:16.203 11:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:16.203 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:16.463 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:16.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:16.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:57:16.463 00:57:16.463 --- 10.0.0.2 ping statistics --- 00:57:16.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.463 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:57:16.463 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:16.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:57:16.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:57:16.463 00:57:16.463 --- 10.0.0.1 ping statistics --- 00:57:16.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.463 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:57:16.463 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.464 
11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2465734 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2465734 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2465734 ']' 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:16.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:16.464 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.464 [2024-12-09 11:08:17.514332] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:16.464 [2024-12-09 11:08:17.514413] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:16.464 [2024-12-09 11:08:17.618040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:16.723 [2024-12-09 11:08:17.665038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:16.723 [2024-12-09 11:08:17.665085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:16.723 [2024-12-09 11:08:17.665096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:16.723 [2024-12-09 11:08:17.665106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:16.723 [2024-12-09 11:08:17.665115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:57:16.723 [2024-12-09 11:08:17.666714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:16.723 [2024-12-09 11:08:17.666794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:16.723 [2024-12-09 11:08:17.666891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:57:16.723 [2024-12-09 11:08:17.666893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.723 [2024-12-09 11:08:17.831162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:16.723 11:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:16.723 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.724 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:16.983 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:16.983 Malloc1 00:57:16.983 [2024-12-09 11:08:17.963015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:16.983 Malloc2 00:57:16.983 Malloc3 00:57:16.983 Malloc4 00:57:16.983 Malloc5 00:57:16.983 Malloc6 00:57:17.244 Malloc7 00:57:17.244 Malloc8 00:57:17.244 Malloc9 
00:57:17.244 Malloc10 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2465947 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2465947 /var/tmp/bdevperf.sock 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2465947 ']' 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:57:17.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.244 { 00:57:17.244 "params": { 00:57:17.244 "name": "Nvme$subsystem", 00:57:17.244 "trtype": "$TEST_TRANSPORT", 00:57:17.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.244 "adrfam": "ipv4", 00:57:17.244 "trsvcid": "$NVMF_PORT", 00:57:17.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.244 "hdgst": ${hdgst:-false}, 00:57:17.244 "ddgst": ${ddgst:-false} 00:57:17.244 }, 00:57:17.244 "method": "bdev_nvme_attach_controller" 00:57:17.244 } 00:57:17.244 EOF 00:57:17.244 )") 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.244 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.244 { 00:57:17.244 "params": { 00:57:17.244 "name": "Nvme$subsystem", 00:57:17.244 "trtype": "$TEST_TRANSPORT", 00:57:17.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.244 
"adrfam": "ipv4", 00:57:17.244 "trsvcid": "$NVMF_PORT", 00:57:17.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.244 "hdgst": ${hdgst:-false}, 00:57:17.244 "ddgst": ${ddgst:-false} 00:57:17.244 }, 00:57:17.244 "method": "bdev_nvme_attach_controller" 00:57:17.244 } 00:57:17.244 EOF 00:57:17.244 )") 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.504 { 00:57:17.504 "params": { 00:57:17.504 "name": "Nvme$subsystem", 00:57:17.504 "trtype": "$TEST_TRANSPORT", 00:57:17.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.504 "adrfam": "ipv4", 00:57:17.504 "trsvcid": "$NVMF_PORT", 00:57:17.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.504 "hdgst": ${hdgst:-false}, 00:57:17.504 "ddgst": ${ddgst:-false} 00:57:17.504 }, 00:57:17.504 "method": "bdev_nvme_attach_controller" 00:57:17.504 } 00:57:17.504 EOF 00:57:17.504 )") 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.504 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.504 { 00:57:17.504 "params": { 00:57:17.504 "name": "Nvme$subsystem", 00:57:17.504 "trtype": "$TEST_TRANSPORT", 00:57:17.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.504 "adrfam": "ipv4", 00:57:17.504 "trsvcid": "$NVMF_PORT", 00:57:17.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:57:17.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.504 "hdgst": ${hdgst:-false}, 00:57:17.504 "ddgst": ${ddgst:-false} 00:57:17.504 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": ${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": 
${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 [2024-12-09 11:08:18.461099] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:57:17.505 [2024-12-09 11:08:18.461174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465947 ] 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": ${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": ${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": ${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:17.505 { 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme$subsystem", 00:57:17.505 "trtype": "$TEST_TRANSPORT", 00:57:17.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "$NVMF_PORT", 00:57:17.505 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:17.505 "hdgst": ${hdgst:-false}, 00:57:17.505 "ddgst": ${ddgst:-false} 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 } 00:57:17.505 EOF 00:57:17.505 )") 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:57:17.505 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme1", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme2", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme3", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 
"method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme4", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme5", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme6", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme7", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.505 "name": "Nvme8", 00:57:17.505 "trtype": "tcp", 00:57:17.505 "traddr": "10.0.0.2", 00:57:17.505 "adrfam": "ipv4", 00:57:17.505 "trsvcid": "4420", 00:57:17.505 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:57:17.505 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:57:17.505 "hdgst": false, 00:57:17.505 "ddgst": false 00:57:17.505 }, 00:57:17.505 "method": "bdev_nvme_attach_controller" 00:57:17.505 },{ 00:57:17.505 "params": { 00:57:17.506 "name": "Nvme9", 00:57:17.506 "trtype": "tcp", 00:57:17.506 "traddr": "10.0.0.2", 00:57:17.506 "adrfam": "ipv4", 00:57:17.506 "trsvcid": "4420", 00:57:17.506 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:57:17.506 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:57:17.506 "hdgst": false, 00:57:17.506 "ddgst": false 00:57:17.506 }, 00:57:17.506 "method": "bdev_nvme_attach_controller" 00:57:17.506 },{ 00:57:17.506 "params": { 00:57:17.506 "name": "Nvme10", 00:57:17.506 "trtype": "tcp", 00:57:17.506 "traddr": "10.0.0.2", 00:57:17.506 "adrfam": "ipv4", 00:57:17.506 "trsvcid": "4420", 00:57:17.506 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:57:17.506 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:57:17.506 "hdgst": false, 00:57:17.506 "ddgst": false 00:57:17.506 }, 00:57:17.506 "method": "bdev_nvme_attach_controller" 00:57:17.506 }' 00:57:17.506 [2024-12-09 11:08:18.590278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:17.506 [2024-12-09 11:08:18.642720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:19.415 Running I/O for 10 seconds... 
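The trace above shows the config-assembly pattern from nvmf/common.sh: one JSON fragment per subsystem is captured into a bash array via a heredoc, the fragments are joined with `IFS=','`, and jq normalizes the combined document (the printf output shows it expanded for Nvme1 through Nvme10). A minimal sketch of that pattern, with an illustrative subsystem count and addresses rather than the real test values:

```shell
#!/usr/bin/env bash
# Sketch: build one JSON fragment per subsystem, join with commas, validate with jq.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas and let jq confirm the result parses
# as a single JSON array of attach-controller calls.
IFS=,
count=$(printf '[%s]' "${config[*]}" | jq -r 'length')
echo "$count"
```

The `${hdgst:-false}`/`${ddgst:-false}` expansions mirror the trace: digest options default to false unless the caller exported them before building the config.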
00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:19.415 11:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:57:19.415 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:19.674 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:19.934 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:57:19.934 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=64 00:57:19.934 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 64 -ge 100 ']' 00:57:19.934 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:57:20.194 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:57:20.464 1424.00 IOPS, 89.00 MiB/s [2024-12-09T10:08:21.640Z] 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # (( i != 0 )) 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2465734 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2465734 ']' 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2465734 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465734 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465734' 00:57:20.464 killing process with pid 2465734 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2465734 00:57:20.464 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2465734 00:57:20.464 [2024-12-09 11:08:21.584237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584371] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.584380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716140 is same with the state(6) to be set 00:57:20.464 [2024-12-09 11:08:21.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.464 [2024-12-09 11:08:21.586609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.464 [2024-12-09 11:08:21.586626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:57:20.465 [2024-12-09 11:08:21.586778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.586968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.586985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.587000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.587017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.587032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.587049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.587064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.587066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.465 [2024-12-09 11:08:21.587081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.465 [2024-12-09 11:08:21.587096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.465 [2024-12-09 11:08:21.587103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.465 [2024-12-09 11:08:21.587388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.465 [2024-12-09 11:08:21.587403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.465 [2024-12-09 11:08:21.587409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.466 [2024-12-09 11:08:21.587429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.466 [2024-12-09 11:08:21.587439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.466 [2024-12-09 11:08:21.587461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.466 [2024-12-09 11:08:21.587472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set
00:57:20.466 [2024-12-09 11:08:21.587488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.466 [2024-12-09 11:08:21.587494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the
state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t[2024-12-09 11:08:21.587503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:57:20.466 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t[2024-12-09 11:08:21.587537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:57:20.466 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1[2024-12-09 11:08:21.587561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 he state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t[2024-12-09 11:08:21.587611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:57:20.466 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:1[2024-12-09 11:08:21.587633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 he state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t[2024-12-09 11:08:21.587687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:57:20.466 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-12-09 11:08:21.587710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 he state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x716610 is same with t[2024-12-09 11:08:21.587740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1he state(6) to be set 00:57:20.466 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716610 is same with the state(6) to be set 00:57:20.466 [2024-12-09 11:08:21.587774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:57:20.466 [2024-12-09 11:08:21.587884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.587980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.587997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.466 [2024-12-09 11:08:21.588208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.466 [2024-12-09 11:08:21.588224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.467 [2024-12-09 11:08:21.588523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:57:20.467 [2024-12-09 11:08:21.588732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.588752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:57:20.467 [2024-12-09 11:08:21.588783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.588814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.588844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23550 is same with the state(6) to be set 00:57:20.467 [2024-12-09 11:08:21.588942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.588963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.588979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.588993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467 [2024-12-09 11:08:21.589009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467 [2024-12-09 11:08:21.589024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aacbd0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab76d0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.467
[2024-12-09 11:08:21.589348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.467
[2024-12-09 11:08:21.589359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.467
[2024-12-09 11:08:21.589374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.468
[2024-12-09 11:08:21.589379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.468
[2024-12-09 11:08:21.589399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.468
[2024-12-09 11:08:21.589409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7260 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468
[2024-12-09 11:08:21.589518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09
11:08:21.589528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589639] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.589757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x716ae0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 
is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 
00:57:20.468 [2024-12-09 11:08:21.591290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591405] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.468 [2024-12-09 11:08:21.591425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 
is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.591671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716fd0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 00:57:20.469 [2024-12-09 11:08:21.592680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set 
00:57:20.469 [2024-12-09 11:08:21.592674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:57:20.469 [2024-12-09 11:08:21.592694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f23550 (9): Bad file descriptor
00:57:20.469 [2024-12-09 11:08:21.592735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.469 [2024-12-09 11:08:21.592795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.469 [2024-12-09 11:08:21.592805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.469 [2024-12-09 11:08:21.592826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.469 [2024-12-09 11:08:21.592847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.469 [2024-12-09 11:08:21.592866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.469 [2024-12-09 11:08:21.592876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.469 [2024-12-09 11:08:21.592900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.469 [2024-12-09 11:08:21.592921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.469 [2024-12-09 11:08:21.592938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.469 [2024-12-09 11:08:21.592943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.592949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.592959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.592961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.592969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.592976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.592979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.592991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.470 [2024-12-09 11:08:21.593238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7174a0 is same with the state(6) to be set
00:57:20.470 [2024-12-09 11:08:21.593251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.470 [2024-12-09 11:08:21.593268] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.470 [2024-12-09 11:08:21.593783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.470 [2024-12-09 11:08:21.593797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.593971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.593986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.594019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.594050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.594082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.594114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.471 [2024-12-09 11:08:21.594145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.471 [2024-12-09 11:08:21.594162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.471 [2024-12-09 11:08:21.594554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.471 [2024-12-09 11:08:21.594559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.471 [2024-12-09 11:08:21.594564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.472 [2024-12-09 11:08:21.594878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09 11:08:21.594888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.472 [2024-12-09 11:08:21.594899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717990 is same with the state(6) to be set
00:57:20.472 [2024-12-09
11:08:21.596153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596276] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.596410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.599269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:57:20.472 [2024-12-09 11:08:21.599314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab76d0 (9): Bad file descriptor 00:57:20.472 [2024-12-09 11:08:21.599461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.472 [2024-12-09 11:08:21.599485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f23550 with addr=10.0.0.2, port=4420 00:57:20.472 [2024-12-09 11:08:21.599501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23550 is same with the state(6) to be set 00:57:20.472 [2024-12-09 11:08:21.599553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.472 [2024-12-09 11:08:21.599572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.472 [2024-12-09 11:08:21.599587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc280 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.599728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8060 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.599901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.599978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.599994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.600009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.600024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc6a0 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.600075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aacbd0 (9): Bad file 
descriptor 00:57:20.473 [2024-12-09 11:08:21.600119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.600136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.600151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.600166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.600181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.600196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.600211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.473 [2024-12-09 11:08:21.600226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.600242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31490 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.600269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab7260 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.601382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f23550 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.601779] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU 
type 0x00 00:57:20.473 [2024-12-09 11:08:21.602708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.473 [2024-12-09 11:08:21.602742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab76d0 with addr=10.0.0.2, port=4420 00:57:20.473 [2024-12-09 11:08:21.602759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab76d0 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.602776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:57:20.473 [2024-12-09 11:08:21.602790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:57:20.473 [2024-12-09 11:08:21.602806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:57:20.473 [2024-12-09 11:08:21.602821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:57:20.473 [2024-12-09 11:08:21.602896] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:57:20.473 [2024-12-09 11:08:21.602957] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:57:20.473 [2024-12-09 11:08:21.603014] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:57:20.473 [2024-12-09 11:08:21.603073] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:57:20.473 [2024-12-09 11:08:21.603292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab76d0 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.603595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:57:20.473 [2024-12-09 11:08:21.603618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:57:20.473 [2024-12-09 11:08:21.603633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:57:20.473 [2024-12-09 11:08:21.603661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:57:20.473 [2024-12-09 11:08:21.607811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:57:20.473 [2024-12-09 11:08:21.608218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.473 [2024-12-09 11:08:21.608249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f23550 with addr=10.0.0.2, port=4420 00:57:20.473 [2024-12-09 11:08:21.608265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23550 is same with the state(6) to be set 00:57:20.473 [2024-12-09 11:08:21.608396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f23550 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.608514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:57:20.473 [2024-12-09 11:08:21.608531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:57:20.473 [2024-12-09 11:08:21.608546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:57:20.473 [2024-12-09 11:08:21.608561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:57:20.473 [2024-12-09 11:08:21.609392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc280 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.609436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed8060 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.609467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc6a0 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.609527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f31490 (9): Bad file descriptor 00:57:20.473 [2024-12-09 11:08:21.609749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.473 [2024-12-09 11:08:21.609772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.609793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.473 [2024-12-09 11:08:21.609808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.473 [2024-12-09 11:08:21.609826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.473 [2024-12-09 11:08:21.609841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.609874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.609891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.609905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.609922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.609937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.609954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.609969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.609986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 
11:08:21.610801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.610975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.610992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.611006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.611023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.611037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.611054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.611069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.611086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.611100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.474 [2024-12-09 11:08:21.611117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.474 [2024-12-09 11:08:21.611132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:57:20.475 [2024-12-09 11:08:21.611341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.475 [2024-12-09 11:08:21.611807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.475 [2024-12-09 11:08:21.611823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbc680 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.611990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 
11:08:21.612023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612137] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.612327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717d10 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.613139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.613165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.613176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.475 [2024-12-09 11:08:21.613186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 
is same with the state(6) to be set 00:57:20.475
[2024-12-09 11:08:21.613195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.476
[2024-12-09 11:08:21.613678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.476
[2024-12-09 11:08:21.613698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.476
[2024-12-09 11:08:21.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477
[2024-12-09 11:08:21.613718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.477
[2024-12-09 11:08:21.613726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477
[2024-12-09 11:08:21.613728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.477
[2024-12-09 11:08:21.613740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.477
[2024-12-09 11:08:21.613741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477
[2024-12-09 11:08:21.613749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.477
[2024-12-09 11:08:21.613759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7181e0 is same with the state(6) to be set 00:57:20.477
[2024-12-09 11:08:21.613759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477
[2024-12-09 11:08:21.613775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477
[2024-12-09 11:08:21.613792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477
[2024-12-09 11:08:21.613807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477
[2024-12-09 11:08:21.613824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477
[2024-12-09 11:08:21.613839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477
[2024-12-09 11:08:21.613856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477
[2024-12-09 11:08:21.613870] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.613887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.613902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.613918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.613933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.613950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.613964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.613981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.613996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:57:20.477 [2024-12-09 11:08:21.614240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.614434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.614449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.477 [2024-12-09 11:08:21.622618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.477 [2024-12-09 11:08:21.622633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 
11:08:21.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.622978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.622993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.623009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.623024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.623040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4690 is same with the state(6) to be set 00:57:20.478 [2024-12-09 11:08:21.624564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 
11:08:21.624855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.624961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.624984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.478 [2024-12-09 11:08:21.625509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.478 [2024-12-09 11:08:21.625529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:57:20.479 [2024-12-09 11:08:21.625596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625844] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.625975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.625998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 
11:08:21.626590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626837] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.626970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.626990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.479 [2024-12-09 11:08:21.627289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.479 [2024-12-09 11:08:21.627314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.480 
[2024-12-09 11:08:21.627334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.627357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.480 [2024-12-09 11:08:21.627378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.627403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2bb7de0 is same with the state(6) to be set 00:57:20.480 [2024-12-09 11:08:21.627599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:57:20.480 [2024-12-09 11:08:21.627634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:57:20.480 [2024-12-09 11:08:21.627788] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:57:20.480 [2024-12-09 11:08:21.627848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.627871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.627894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.627914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.627935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.627977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.627997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.628017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f16030 is same with the state(6) to be set 00:57:20.480 [2024-12-09 11:08:21.628075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.628098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.628119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.628140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.628161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.628181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.628202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:57:20.480 [2024-12-09 11:08:21.628222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.480 [2024-12-09 11:08:21.628241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17d00 is same with the state(6) to be set 00:57:20.745 [2024-12-09 11:08:21.630246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:57:20.745 [2024-12-09 11:08:21.630507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.745 [2024-12-09 11:08:21.630542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab7260 with addr=10.0.0.2, port=4420 00:57:20.745 [2024-12-09 11:08:21.630564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7260 is same with the state(6) to be set 00:57:20.745 [2024-12-09 11:08:21.630802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.745 [2024-12-09 11:08:21.630835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aacbd0 with addr=10.0.0.2, port=4420 
00:57:20.745 [2024-12-09 11:08:21.630855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aacbd0 is same with the state(6) to be set 00:57:20.745 [2024-12-09 11:08:21.631743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.631798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.631842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.631885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.631928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.631971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.631991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.745 [2024-12-09 11:08:21.632334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.745 [2024-12-09 11:08:21.632356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632703] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.632961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.632980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 
11:08:21.633428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.746 [2024-12-09 11:08:21.633923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.746 [2024-12-09 11:08:21.633945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.633965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.633987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:57:20.747 [2024-12-09 11:08:21.634157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.634476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.634498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1780 is same with the state(6) to be set 00:57:20.747 [2024-12-09 11:08:21.636244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:57:20.747 [2024-12-09 11:08:21.636532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.636983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.636999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.747 [2024-12-09 11:08:21.637014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.747 [2024-12-09 11:08:21.637031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637423] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.748 [2024-12-09 11:08:21.637580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.748 [2024-12-09 11:08:21.637597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:20.748 [2024-12-09 11:08:21.637611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:20.748 [2024-12-09 11:08:21.637628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 20 further identical command/completion pairs elided: READ sqid:1 cid:44-63 nsid:1 lba:30208-32640 len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:57:20.749 [2024-12-09 11:08:21.638294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe8d0 is same with the state(6) to be set
00:57:20.749 [2024-12-09 11:08:21.639741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 63 further identical command/completion pairs elided: WRITE sqid:1 cid:1-3 nsid:1 lba:32896-33152 and READ sqid:1 cid:4-63 nsid:1 lba:25088-32640, len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:57:20.750 [2024-12-09 11:08:21.641799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296a420 is same with the state(6) to be set
00:57:20.750 [2024-12-09 11:08:21.643315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:57:20.750 [2024-12-09 11:08:21.643346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:57:20.750 [2024-12-09 11:08:21.643368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:57:20.750 [2024-12-09 11:08:21.643388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:57:20.750 [2024-12-09 11:08:21.643654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc6a0 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.643699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc6a0 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.643720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab7260 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.643740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aacbd0 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.643802] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:57:20.751 [2024-12-09 11:08:21.643826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f16030 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.643859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17d00 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.643891] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:57:20.751 [2024-12-09 11:08:21.643912] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:57:20.751 [2024-12-09 11:08:21.643936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc6a0 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.644407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:57:20.751 [2024-12-09 11:08:21.644626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.644658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab76d0 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.644674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab76d0 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.644847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.644868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f23550 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.644883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23550 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.645126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.645147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f31490 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.645163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31490 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.645347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.645367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc280 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.645382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc280 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.645399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.645413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.645429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.645444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.645460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.645473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.645488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.645501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.646538] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:57:20.751 [2024-12-09 11:08:21.646797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.751 [2024-12-09 11:08:21.646821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed8060 with addr=10.0.0.2, port=4420
00:57:20.751 [2024-12-09 11:08:21.646836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8060 is same with the state(6) to be set
00:57:20.751 [2024-12-09 11:08:21.646856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab76d0 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.646875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f23550 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.646893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f31490 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.646916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc280 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.646933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.646947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.646962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.646976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.647063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed8060 (9): Bad file descriptor
00:57:20.751 [2024-12-09 11:08:21.647083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.647096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.647110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.647123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.647138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.647152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.647167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.647180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.647194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.647208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.647222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.647235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.647249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.647262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.647277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.647289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.647350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:57:20.751 [2024-12-09 11:08:21.647365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:57:20.751 [2024-12-09 11:08:21.647379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:57:20.751 [2024-12-09 11:08:21.647392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:57:20.751 [2024-12-09 11:08:21.653486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.751 [2024-12-09 11:08:21.653770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.751 [2024-12-09 11:08:21.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.653975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.653992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:57:20.752 [2024-12-09 11:08:21.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 
11:08:21.654768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.752 [2024-12-09 11:08:21.654956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.752 [2024-12-09 11:08:21.654973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.654987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 
[2024-12-09 11:08:21.655305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:20.753 [2024-12-09 11:08:21.655398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:20.753 [2024-12-09 11:08:21.655414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e057e0 is same with the state(6) to be set 00:57:20.753 [2024-12-09 11:08:21.656811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:57:20.753 [2024-12-09 11:08:21.656834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:57:20.753 [2024-12-09 11:08:21.656852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:57:20.753 [2024-12-09 11:08:21.656869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:57:20.753 task offset: 
24576 on job bdev=Nvme10n1 fails
00:57:20.753
00:57:20.753 Latency(us)
00:57:20.753 [2024-12-09T10:08:21.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:57:20.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme1n1 ended in about 1.33 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme1n1 : 1.33 144.83 9.05 48.28 0.00 328177.36 6981.01 330073.49
00:57:20.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme2n1 ended in about 1.34 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme2n1 : 1.34 146.90 9.18 47.72 0.00 320090.03 10371.78 328249.88
00:57:20.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme3n1 ended in about 1.35 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme3n1 : 1.35 141.98 8.87 47.33 0.00 323459.78 16298.52 326426.27
00:57:20.753 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme4n1 ended in about 1.36 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme4n1 : 1.36 140.78 8.80 46.93 0.00 320717.25 15614.66 333720.71
00:57:20.753 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme5n1 ended in about 1.37 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme5n1 : 1.37 140.40 8.78 46.80 0.00 315919.14 24162.84 324602.66
00:57:20.753 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme6n1 ended in about 1.37 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme6n1 : 1.37 142.96 8.94 46.68 0.00 306274.93 38751.72 302719.33
00:57:20.753 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme7n1 ended in about 1.36 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme7n1 : 1.36 141.42 8.84 47.14 0.00 302195.09 20401.64 328249.88
00:57:20.753 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme8n1 ended in about 1.38 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme8n1 : 1.38 187.79 11.74 43.34 0.00 241755.00 14588.88 319131.83
00:57:20.753 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme9n1 : 1.34 196.98 12.31 0.00 0.00 277199.01 1453.19 295424.89
00:57:20.753 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:57:20.753 Job: Nvme10n1 ended in about 1.32 seconds with error
00:57:20.753 Verification LBA range: start 0x0 length 0x400
00:57:20.753 Nvme10n1 : 1.32 145.38 9.09 48.46 0.00 275760.08 18122.13 328249.88
00:57:20.753 [2024-12-09T10:08:21.929Z] ===================================================================================================================
00:57:20.753 [2024-12-09T10:08:21.929Z] Total : 1529.43 95.59 422.67 0.00 299694.87 1453.19 333720.71
00:57:20.753 [2024-12-09 11:08:21.698874] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:57:20.753 [2024-12-09 11:08:21.698937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:57:20.753 [2024-12-09 11:08:21.699488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.753 [2024-12-09 11:08:21.699521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f16030 with addr=10.0.0.2, port=4420
00:57:20.753 [2024-12-09 11:08:21.699540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f16030 is same with the state(6) to be set
00:57:20.753 [2024-12-09 11:08:21.699744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.753 [2024-12-09 11:08:21.699765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aacbd0 with addr=10.0.0.2, port=4420
00:57:20.753 [2024-12-09 11:08:21.699780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aacbd0 is same with the state(6) to be set
00:57:20.753 [2024-12-09 11:08:21.699947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.753 [2024-12-09 11:08:21.699966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab7260 with addr=10.0.0.2, port=4420
00:57:20.753 [2024-12-09 11:08:21.699981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7260 is same with the state(6) to be set
00:57:20.753 [2024-12-09 11:08:21.700178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.753 [2024-12-09 11:08:21.700197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc6a0 with addr=10.0.0.2, port=4420
00:57:20.753 [2024-12-09 11:08:21.700212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc6a0 is same with the state(6) to be set
00:57:20.753 [2024-12-09 11:08:21.700448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.753 [2024-12-09 11:08:21.700467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17d00 with addr=10.0.0.2, port=4420
00:57:20.753 [2024-12-09 11:08:21.700483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17d00 is same with the state(6) to be set
00:57:20.753 [2024-12-09 11:08:21.700527] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:57:20.753 [2024-12-09 11:08:21.700549] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:57:20.753 [2024-12-09 11:08:21.700570] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:57:20.753 [2024-12-09 11:08:21.700591] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:57:20.753 [2024-12-09 11:08:21.700610] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:57:20.753 [2024-12-09 11:08:21.700973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:57:20.754 [2024-12-09 11:08:21.700996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:57:20.754 [2024-12-09 11:08:21.701013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:57:20.754 [2024-12-09 11:08:21.701031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:57:20.754 [2024-12-09 11:08:21.701047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:57:20.754 [2024-12-09 11:08:21.701128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f16030 (9): Bad file descriptor
00:57:20.754 [2024-12-09 11:08:21.701150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aacbd0 (9): Bad file descriptor
00:57:20.754 [2024-12-09 11:08:21.701168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab7260 (9): Bad file descriptor
00:57:20.754 [2024-12-09 11:08:21.701186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc6a0 (9): Bad file descriptor
00:57:20.754 [2024-12-09 11:08:21.701203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17d00 (9): Bad file descriptor
00:57:20.754 [2024-12-09 11:08:21.701499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.754 [2024-12-09 11:08:21.701521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc280 with addr=10.0.0.2, port=4420
00:57:20.754 [2024-12-09 11:08:21.701537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc280 is same with the state(6) to be set
00:57:20.754 [2024-12-09 11:08:21.701757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.754 [2024-12-09 11:08:21.701777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f31490 with addr=10.0.0.2, port=4420
00:57:20.754 [2024-12-09 11:08:21.701792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31490 is same with the state(6) to be set
00:57:20.754 [2024-12-09 11:08:21.702005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.754 [2024-12-09 11:08:21.702029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f23550 with addr=10.0.0.2, port=4420
00:57:20.754 [2024-12-09 11:08:21.702044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23550 is same with the state(6) to be set
00:57:20.754 [2024-12-09 11:08:21.702202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:20.754 [2024-12-09 11:08:21.702222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab76d0 with addr=10.0.0.2, port=4420
00:57:20.754 [2024-12-09 11:08:21.702237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1ab76d0 is same with the state(6) to be set 00:57:20.754 [2024-12-09 11:08:21.702408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:20.754 [2024-12-09 11:08:21.702427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed8060 with addr=10.0.0.2, port=4420 00:57:20.754 [2024-12-09 11:08:21.702441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8060 is same with the state(6) to be set 00:57:20.754 [2024-12-09 11:08:21.702457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.702516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:57:20.754 [2024-12-09 11:08:21.702570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.702624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.702683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:57:20.754 [2024-12-09 11:08:21.702785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc280 (9): Bad file descriptor 00:57:20.754 [2024-12-09 11:08:21.702810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f31490 (9): Bad file descriptor 00:57:20.754 [2024-12-09 11:08:21.702827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f23550 (9): Bad file descriptor 00:57:20.754 [2024-12-09 11:08:21.702845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab76d0 (9): Bad file descriptor 00:57:20.754 [2024-12-09 11:08:21.702863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed8060 (9): Bad file descriptor 00:57:20.754 [2024-12-09 11:08:21.702904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.702946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.702960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.702974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.702988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:57:20.754 [2024-12-09 11:08:21.703001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.703015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.703029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.703043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.703055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.703070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.703084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.703098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:57:20.754 [2024-12-09 11:08:21.703110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:57:20.754 [2024-12-09 11:08:21.703125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:57:20.754 [2024-12-09 11:08:21.703139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:57:20.754 [2024-12-09 11:08:21.703155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:57:20.754 [2024-12-09 11:08:21.703167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:57:21.014 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2465947 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2465947 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2465947 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@672 -- # es=1 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:57:22.120 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:22.121 rmmod nvme_tcp 00:57:22.121 rmmod nvme_fabrics 00:57:22.121 rmmod nvme_keyring 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@128 -- # set -e 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2465734 ']' 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2465734 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2465734 ']' 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2465734 00:57:22.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2465734) - No such process 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2465734 is not found' 00:57:22.121 Process with pid 2465734 is not found 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:22.121 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:24.160 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:24.160 00:57:24.160 real 0m8.167s 00:57:24.160 user 0m20.922s 00:57:24.160 sys 0m1.790s 00:57:24.160 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:24.160 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:57:24.160 ************************************ 00:57:24.160 END TEST nvmf_shutdown_tc3 00:57:24.160 ************************************ 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:24.420 
************************************ 00:57:24.420 START TEST nvmf_shutdown_tc4 00:57:24.420 ************************************ 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:57:24.420 
11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:57:24.420 
11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:24.420 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:24.421 11:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:57:24.421 Found 0000:af:00.0 (0x8086 - 0x159b) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:57:24.421 Found 0000:af:00.1 (0x8086 - 0x159b) 00:57:24.421 11:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:24.421 11:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:57:24.421 Found net devices under 0000:af:00.0: cvl_0_0 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:57:24.421 Found net devices under 0000:af:00.1: cvl_0_1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:24.421 11:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:24.421 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:24.681 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:24.681 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:24.681 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:24.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:24.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:57:24.682 00:57:24.682 --- 10.0.0.2 ping statistics --- 00:57:24.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:24.682 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:24.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:24.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:57:24.682 00:57:24.682 --- 10.0.0.1 ping statistics --- 00:57:24.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:24.682 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:24.682 11:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2466986 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2466986 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2466986 ']' 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:24.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:24.682 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.682 [2024-12-09 11:08:25.754443] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:57:24.682 [2024-12-09 11:08:25.754517] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:24.682 [2024-12-09 11:08:25.855464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:24.942 [2024-12-09 11:08:25.900278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:24.942 [2024-12-09 11:08:25.900320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:24.942 [2024-12-09 11:08:25.900330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:24.942 [2024-12-09 11:08:25.900340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:24.942 [2024-12-09 11:08:25.900348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
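The nvmf_tcp_init sequence logged above (common.sh@250-291) boils down to the following namespace plumbing. This is a hedged sketch reconstructed from the logged commands, not the harness itself; it assumes root privileges and that cvl_0_0/cvl_0_1 are a physically connected interface pair, as on this test node:

```shell
# Sketch of the point-to-point topology nvmf_tcp_init builds:
# target side moves into its own network namespace, initiator side
# stays in the root namespace, both on the same 10.0.0.0/24 subnet.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                  # clear any stale addresses
ip -4 addr flush cvl_0_1

ip netns add "$NS"                        # create the target namespace
ip link set cvl_0_0 netns "$NS"           # move the target NIC into it

ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port (4420) on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                        # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

This is why the nvmf_tgt invocation later in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk ...`: the target must run inside the namespace so it listens on 10.0.0.2 there.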
00:57:24.942 [2024-12-09 11:08:25.901875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:24.942 [2024-12-09 11:08:25.901964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:24.942 [2024-12-09 11:08:25.902071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:57:24.942 [2024-12-09 11:08:25.902073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.942 [2024-12-09 11:08:26.056934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:24.942 11:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:57:24.942 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[the shutdown.sh@28/@29 "for i" / "cat" pair repeats identically for each of the 10 subsystems; duplicate lines elided]
00:57:25.202 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:57:25.202 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:25.202 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:25.202 Malloc1 00:57:25.202 [2024-12-09 11:08:26.177327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:25.202 Malloc2 00:57:25.202 Malloc3 00:57:25.202 Malloc4 00:57:25.202 Malloc5 00:57:25.202 Malloc6 00:57:25.461 Malloc7 00:57:25.461 Malloc8 00:57:25.461 Malloc9 
00:57:25.461 Malloc10 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2467181 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:57:25.461 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:57:25.721 [2024-12-09 11:08:26.728320] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2466986 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2466986 ']' 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2466986 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466986 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466986' 00:57:31.022 killing process with pid 2466986 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2466986 00:57:31.022 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2466986 00:57:31.022 Write completed with error (sct=0, sc=8) 00:57:31.022 Write completed with error (sct=0, sc=8) 00:57:31.022 starting I/O failed: -6 00:57:31.022 Write completed with error (sct=0, sc=8) 
[hundreds of identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:57:31.023 [2024-12-09 11:08:31.733661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[identical write-error lines elided]
00:57:31.023 [2024-12-09 11:08:31.734860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[identical write-error lines elided]
00:57:31.023 [2024-12-09 11:08:31.736411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[identical write-error lines elided]
00:57:31.024 [2024-12-09 11:08:31.738519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:57:31.024 NVMe io qpair process completion error
[identical write-error lines elided]
00:57:31.024 [2024-12-09 11:08:31.739777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[identical write-error lines elided]
00:57:31.024 [2024-12-09 11:08:31.740877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[identical write-error lines elided]
Write completed with error (sct=0, sc=8) 00:57:31.024 starting I/O failed: -6 00:57:31.024 Write completed with error (sct=0, sc=8) 00:57:31.024 starting I/O failed: -6 00:57:31.024 Write completed with error (sct=0, sc=8) 00:57:31.024 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, 
sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 [2024-12-09 11:08:31.742240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error 
(sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with 
error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed 
with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 [2024-12-09 11:08:31.744769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:57:31.025 NVMe io qpair process completion error 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, 
sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 starting I/O failed: -6 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.025 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 [2024-12-09 11:08:31.746183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 
starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with 
error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 [2024-12-09 11:08:31.747315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with 
error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 
Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 [2024-12-09 11:08:31.748701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 
00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: 
-6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.026 starting I/O failed: -6 00:57:31.026 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O 
failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 [2024-12-09 11:08:31.751487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:57:31.027 NVMe io qpair process completion error 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 starting I/O failed: -6 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, sc=8) 00:57:31.027 Write completed with error (sct=0, 
sc=8)
00:57:31.027 Write completed with error (sct=0, sc=8)
00:57:31.027 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:57:31.027 [2024-12-09 11:08:31.753025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error entries elided]
00:57:31.027 [2024-12-09 11:08:31.754127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error entries elided]
00:57:31.028 [2024-12-09 11:08:31.755482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error entries elided]
00:57:31.028 [2024-12-09 11:08:31.759376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:57:31.028 NVMe io qpair process completion error
[repeated write-error entries elided]
00:57:31.029 [2024-12-09 11:08:31.760894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error entries elided]
00:57:31.029 [2024-12-09 11:08:31.761968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error entries elided]
00:57:31.029 [2024-12-09 11:08:31.763329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error entries elided]
00:57:31.030 [2024-12-09 11:08:31.767225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:57:31.030 NVMe io qpair process completion error
[repeated write-error entries elided]
00:57:31.030 [2024-12-09 11:08:31.768748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error entries elided]
00:57:31.030 [2024-12-09 11:08:31.769849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error entries elided]
00:57:31.031 [2024-12-09 11:08:31.771194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated write-error entries elided]
Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 
00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: 
-6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 [2024-12-09 11:08:31.774028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:57:31.031 NVMe io qpair process completion error 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write 
completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.031 starting I/O failed: -6 00:57:31.031 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 [2024-12-09 11:08:31.775325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 
00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write 
completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 [2024-12-09 11:08:31.776429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 
00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with 
error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 [2024-12-09 11:08:31.777799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O 
failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.032 Write completed with error (sct=0, sc=8) 00:57:31.032 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting 
I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 
starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 [2024-12-09 11:08:31.781379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:57:31.033 NVMe io qpair process completion error 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with 
error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 [2024-12-09 11:08:31.782739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error 
(sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 
00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 [2024-12-09 11:08:31.783845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.033 Write completed with error (sct=0, sc=8) 00:57:31.033 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 starting I/O failed: -6 00:57:31.034 Write completed with error (sct=0, sc=8) 00:57:31.034 Write completed with error (sct=0, sc=8) 
00:57:31.034 starting I/O failed: -6
00:57:31.034 Write completed with error (sct=0, sc=8)
[… repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided …]
00:57:31.034 [2024-12-09 11:08:31.785209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[… repeated write-error entries elided …]
00:57:31.035 [2024-12-09 11:08:31.787872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:57:31.035 NVMe io qpair process completion error
[… repeated write-error entries elided …]
00:57:31.035 [2024-12-09 11:08:31.789335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[… repeated write-error entries elided …]
00:57:31.035 [2024-12-09 11:08:31.790449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated write-error entries elided …]
00:57:31.036 [2024-12-09 11:08:31.791821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[… repeated write-error entries elided …]
00:57:31.036 [2024-12-09 11:08:31.795103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:57:31.036 NVMe io qpair process completion error
[… repeated write-error entries elided …]
00:57:31.036 [2024-12-09 11:08:31.796503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated write-error entries elided …]
00:57:31.037 [2024-12-09 11:08:31.797754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[… repeated write-error entries elided …]
00:57:31.037 [2024-12-09 11:08:31.799073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[… repeated write-error entries elided …]
00:57:31.038 [2024-12-09 11:08:31.803876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:57:31.038 NVMe io qpair process completion error
00:57:31.038 Initializing NVMe Controllers
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:57:31.038 Controller IO queue size 128, less than required.
00:57:31.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:57:31.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
[the identical "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver." warning pair was logged after each controller above; duplicates elided]
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:57:31.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:57:31.038 Initialization complete. Launching workers.
00:57:31.038 ========================================================
00:57:31.038 Latency(us)
00:57:31.038 Device Information                                                : IOPS     MiB/s  Average   min     max
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1547.10  66.48  82745.76   944.91  167251.51
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1539.07  66.13  81992.01  1113.52  161130.87
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1502.52  64.56  84005.33   943.26  156785.07
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1558.08  66.95  81034.48  1173.09  157559.27
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1577.52  67.78  80067.20   967.77  145570.59
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1580.90  67.93  79948.20  1069.98  158240.76
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1577.52  67.78  80165.46   967.95  157930.36
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1587.45  68.21  79692.07   906.23  157352.11
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1578.15  67.81  80201.49  1146.36  148095.26
00:57:31.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1585.12  68.11  79880.90   841.50  148344.46
00:57:31.038 ========================================================
00:57:31.038 Total                                                  :               15633.42 671.75  80951.12   841.50  167251.51
00:57:31.038
00:57:31.038 [2024-12-09 11:08:31.808841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0740 is same with the state(6) to be set
00:57:31.038 [2024-12-09 11:08:31.808908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1690 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.808955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf890 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfbc0 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc19c0 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9e870 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0a70 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0410 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf560 is same with the state(6) to be set
00:57:31.039 [2024-12-09 11:08:31.809280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfef0 is same with the state(6) to be set
00:57:31.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:57:31.039 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2467181
00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2467181
00:57:31.978 11:08:33
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2467181 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:31.978 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:57:32.238 11:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:32.238 rmmod nvme_tcp 00:57:32.238 rmmod nvme_fabrics 00:57:32.238 rmmod nvme_keyring 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2466986 ']' 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2466986 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2466986 ']' 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2466986 00:57:32.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2466986) - No such process 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2466986 is not 
found' 00:57:32.238 Process with pid 2466986 is not found 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:32.238 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:34.147 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:34.147 00:57:34.147 real 0m9.921s 00:57:34.147 user 0m25.324s 00:57:34.147 sys 0m5.396s 00:57:34.147 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:57:34.147 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:57:34.147 ************************************ 00:57:34.147 END TEST nvmf_shutdown_tc4 00:57:34.147 ************************************ 00:57:34.406 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:57:34.406 00:57:34.406 real 0m43.462s 00:57:34.406 user 1m46.776s 00:57:34.406 sys 0m16.127s 00:57:34.406 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:34.406 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:34.406 ************************************ 00:57:34.406 END TEST nvmf_shutdown 00:57:34.406 ************************************ 00:57:34.406 11:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@70 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:34.407 ************************************ 00:57:34.407 START TEST nvmf_nsid 00:57:34.407 ************************************ 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:57:34.407 * Looking for test storage... 
00:57:34.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:34.407 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:34.667 
11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:34.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.667 --rc genhtml_branch_coverage=1 00:57:34.667 --rc genhtml_function_coverage=1 00:57:34.667 --rc genhtml_legend=1 00:57:34.667 --rc geninfo_all_blocks=1 00:57:34.667 --rc 
geninfo_unexecuted_blocks=1 00:57:34.667 00:57:34.667 ' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:34.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.667 --rc genhtml_branch_coverage=1 00:57:34.667 --rc genhtml_function_coverage=1 00:57:34.667 --rc genhtml_legend=1 00:57:34.667 --rc geninfo_all_blocks=1 00:57:34.667 --rc geninfo_unexecuted_blocks=1 00:57:34.667 00:57:34.667 ' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:34.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.667 --rc genhtml_branch_coverage=1 00:57:34.667 --rc genhtml_function_coverage=1 00:57:34.667 --rc genhtml_legend=1 00:57:34.667 --rc geninfo_all_blocks=1 00:57:34.667 --rc geninfo_unexecuted_blocks=1 00:57:34.667 00:57:34.667 ' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:34.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.667 --rc genhtml_branch_coverage=1 00:57:34.667 --rc genhtml_function_coverage=1 00:57:34.667 --rc genhtml_legend=1 00:57:34.667 --rc geninfo_all_blocks=1 00:57:34.667 --rc geninfo_unexecuted_blocks=1 00:57:34.667 00:57:34.667 ' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:34.667 11:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:34.667 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:34.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:57:34.668 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:57:41.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:57:41.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:57:41.243 Found net devices under 0000:af:00.0: cvl_0_0 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:57:41.243 Found net devices under 0000:af:00.1: cvl_0_1 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:41.243 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:41.243 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:41.244 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:57:41.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:57:41.244 00:57:41.244 --- 10.0.0.2 ping statistics --- 00:57:41.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:41.244 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:41.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:41.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:57:41.244 00:57:41.244 --- 10.0.0.1 ping statistics --- 00:57:41.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:41.244 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:41.244 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2471000 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2471000 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2471000 ']' 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:41.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:57:41.244 [2024-12-09 11:08:41.577450] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:41.244 [2024-12-09 11:08:41.577526] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:41.244 [2024-12-09 11:08:41.709007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:41.244 [2024-12-09 11:08:41.761461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:41.244 [2024-12-09 11:08:41.761512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:41.244 [2024-12-09 11:08:41.761528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:41.244 [2024-12-09 11:08:41.761542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:41.244 [2024-12-09 11:08:41.761553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:57:41.244 [2024-12-09 11:08:41.762191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2471174 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:41.244 
11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=18e1ef6f-64cc-43d6-af89-69158fd755d6 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5b2eb38f-3636-45cc-8f66-8534b2af0783 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=02224a02-d454-4c93-bbf6-e18a99420b5c 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:41.244 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.244 null0 00:57:41.244 null1 00:57:41.244 null2 00:57:41.244 [2024-12-09 11:08:41.996658] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:41.244 [2024-12-09 11:08:41.996735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471174 ] 00:57:41.244 [2024-12-09 11:08:41.998146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:41.244 [2024-12-09 11:08:42.022356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2471174 /var/tmp/tgt2.sock 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2471174 ']' 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:57:41.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:41.244 [2024-12-09 11:08:42.095486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:41.244 [2024-12-09 11:08:42.144873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:57:41.244 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:57:41.503 [2024-12-09 11:08:42.679157] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:41.763 [2024-12-09 11:08:42.695253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:57:41.763 nvme0n1 nvme0n2 00:57:41.763 nvme1n1 00:57:41.763 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:57:41.763 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:57:41.763 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:57:42.702 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:57:42.703 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:57:42.703 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:57:42.703 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:57:42.703 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:57:42.703 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 18e1ef6f-64cc-43d6-af89-69158fd755d6 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:57:43.641 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=18e1ef6f64cc43d6af8969158fd755d6 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 18E1EF6F64CC43D6AF8969158FD755D6 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 18E1EF6F64CC43D6AF8969158FD755D6 == \1\8\E\1\E\F\6\F\6\4\C\C\4\3\D\6\A\F\8\9\6\9\1\5\8\F\D\7\5\5\D\6 ]] 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5b2eb38f-3636-45cc-8f66-8534b2af0783 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:57:43.641 
11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5b2eb38f363645cc8f668534b2af0783 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5B2EB38F363645CC8F668534B2AF0783 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5B2EB38F363645CC8F668534B2AF0783 == \5\B\2\E\B\3\8\F\3\6\3\6\4\5\C\C\8\F\6\6\8\5\3\4\B\2\A\F\0\7\8\3 ]] 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 02224a02-d454-4c93-bbf6-e18a99420b5c 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:57:43.641 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:57:43.901 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=02224a02d4544c93bbf6e18a99420b5c 00:57:43.901 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 02224A02D4544C93BBF6E18A99420B5C 00:57:43.901 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 02224A02D4544C93BBF6E18A99420B5C == \0\2\2\2\4\A\0\2\D\4\5\4\4\C\9\3\B\B\F\6\E\1\8\A\9\9\4\2\0\B\5\C ]] 00:57:43.901 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2471174 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2471174 ']' 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2471174 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:43.901 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471174 00:57:44.161 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:44.161 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:44.161 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471174' 00:57:44.161 killing process with pid 2471174 00:57:44.161 11:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2471174 00:57:44.161 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2471174 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:44.420 rmmod nvme_tcp 00:57:44.420 rmmod nvme_fabrics 00:57:44.420 rmmod nvme_keyring 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2471000 ']' 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2471000 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2471000 ']' 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2471000 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:57:44.420 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:44.420 11:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471000 00:57:44.680 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:44.680 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:44.680 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471000' 00:57:44.680 killing process with pid 2471000 00:57:44.680 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2471000 00:57:44.680 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2471000 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:44.940 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:44.940 11:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:46.849 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:46.849 00:57:46.849 real 0m12.538s 00:57:46.849 user 0m9.779s 00:57:46.849 sys 0m5.696s 00:57:46.849 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:46.849 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:57:46.849 ************************************ 00:57:46.849 END TEST nvmf_nsid 00:57:46.849 ************************************ 00:57:47.109 11:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@72 -- # trap - SIGINT SIGTERM EXIT 00:57:47.109 00:57:47.109 real 13m30.501s 00:57:47.109 user 28m39.998s 00:57:47.109 sys 4m21.181s 00:57:47.109 11:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:47.109 11:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:47.109 ************************************ 00:57:47.109 END TEST nvmf_target_extra 00:57:47.109 ************************************ 00:57:47.109 11:08:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:57:47.109 11:08:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:47.109 11:08:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:47.109 11:08:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:57:47.109 ************************************ 00:57:47.109 START TEST nvmf_host 00:57:47.109 ************************************ 00:57:47.109 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:57:47.109 * Looking for test storage... 
00:57:47.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:57:47.109 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:47.109 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:57:47.109 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:47.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.370 --rc genhtml_branch_coverage=1 00:57:47.370 --rc genhtml_function_coverage=1 00:57:47.370 --rc genhtml_legend=1 00:57:47.370 --rc geninfo_all_blocks=1 00:57:47.370 --rc geninfo_unexecuted_blocks=1 00:57:47.370 00:57:47.370 ' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:47.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.370 --rc genhtml_branch_coverage=1 00:57:47.370 --rc genhtml_function_coverage=1 00:57:47.370 --rc genhtml_legend=1 00:57:47.370 --rc 
geninfo_all_blocks=1 00:57:47.370 --rc geninfo_unexecuted_blocks=1 00:57:47.370 00:57:47.370 ' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:47.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.370 --rc genhtml_branch_coverage=1 00:57:47.370 --rc genhtml_function_coverage=1 00:57:47.370 --rc genhtml_legend=1 00:57:47.370 --rc geninfo_all_blocks=1 00:57:47.370 --rc geninfo_unexecuted_blocks=1 00:57:47.370 00:57:47.370 ' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:47.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.370 --rc genhtml_branch_coverage=1 00:57:47.370 --rc genhtml_function_coverage=1 00:57:47.370 --rc genhtml_legend=1 00:57:47.370 --rc geninfo_all_blocks=1 00:57:47.370 --rc geninfo_unexecuted_blocks=1 00:57:47.370 00:57:47.370 ' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:47.370 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:47.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:57:47.371 ************************************ 00:57:47.371 START TEST nvmf_multicontroller 00:57:47.371 ************************************ 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:57:47.371 * Looking for test storage... 
00:57:47.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:57:47.371 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:47.631 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:47.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.631 --rc genhtml_branch_coverage=1 00:57:47.631 --rc genhtml_function_coverage=1 
00:57:47.631 --rc genhtml_legend=1 00:57:47.631 --rc geninfo_all_blocks=1 00:57:47.631 --rc geninfo_unexecuted_blocks=1 00:57:47.631 00:57:47.631 ' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:47.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.632 --rc genhtml_branch_coverage=1 00:57:47.632 --rc genhtml_function_coverage=1 00:57:47.632 --rc genhtml_legend=1 00:57:47.632 --rc geninfo_all_blocks=1 00:57:47.632 --rc geninfo_unexecuted_blocks=1 00:57:47.632 00:57:47.632 ' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:47.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.632 --rc genhtml_branch_coverage=1 00:57:47.632 --rc genhtml_function_coverage=1 00:57:47.632 --rc genhtml_legend=1 00:57:47.632 --rc geninfo_all_blocks=1 00:57:47.632 --rc geninfo_unexecuted_blocks=1 00:57:47.632 00:57:47.632 ' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:47.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.632 --rc genhtml_branch_coverage=1 00:57:47.632 --rc genhtml_function_coverage=1 00:57:47.632 --rc genhtml_legend=1 00:57:47.632 --rc geninfo_all_blocks=1 00:57:47.632 --rc geninfo_unexecuted_blocks=1 00:57:47.632 00:57:47.632 ' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:47.632 11:08:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:47.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:57:47.632 11:08:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:57:54.208 Found 0000:af:00.0 (0x8086 - 0x159b) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:57:54.208 Found 0000:af:00.1 (0x8086 - 0x159b) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:54.208 11:08:55 
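The `e810+=`/`x722+=`/`mlx+=` lines above bucket candidate NICs by PCI vendor:device ID before enumeration. A minimal sketch of that classification logic, using only the IDs visible in this log (the function name `classify_nic` is hypothetical, not from `nvmf/common.sh`):

```shell
# Sketch: map a PCI device ID to the NIC family bucket used by the test
# harness. IDs taken from the e810/x722/mlx array setup in the log above;
# classify_nic is an illustrative name, not an actual common.sh helper.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;      # Intel E810 (vendor 0x8086)
        0x37d2)        echo x722 ;;      # Intel X722
        0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;       # Mellanox (vendor 0x15b3)
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # the two NICs found in this run (0000:af:00.0/.1) are E810
```

In this run both discovered devices report `0x8086 - 0x159b`, so `pci_devs` is populated from the `e810` bucket and the `(( 2 == 0 ))` empty-list check passes.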
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:57:54.208 Found net devices under 0000:af:00.0: cvl_0_0 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:57:54.208 Found net devices under 0000:af:00.1: cvl_0_1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:54.208 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:54.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:54.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:57:54.469 00:57:54.469 --- 10.0.0.2 ping statistics --- 00:57:54.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:54.469 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:54.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:57:54.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:57:54.469 00:57:54.469 --- 10.0.0.1 ping statistics --- 00:57:54.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:54.469 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2474916 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
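The `nvmf_tcp_init` sequence above moves one port (`cvl_0_0`) into a fresh network namespace and leaves its sibling (`cvl_0_1`) in the root namespace, then confirms reachability in both directions with `ping`. The topology can be sketched as below; `run=echo` makes it a dry run (printing each command instead of executing it), since the real commands need root and the physical NICs from this testbed:

```shell
# Dry-run sketch of the namespace topology built in the log above:
# target side (cvl_0_0, 10.0.0.2) lives inside the namespace, initiator
# side (cvl_0_1, 10.0.0.1) stays in the root namespace.
run=echo                      # set run= to actually execute (needs root)
ns=cvl_0_0_ns_spdk

$run ip netns add "$ns"
$run ip link set cvl_0_0 netns "$ns"
$run ip addr add 10.0.0.1/24 dev cvl_0_1
$run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
$run ip link set cvl_0_1 up
$run ip netns exec "$ns" ip link set cvl_0_0 up
$run ip netns exec "$ns" ip link set lo up
$run ping -c 1 10.0.0.2                       # root ns -> target
$run ip netns exec "$ns" ping -c 1 10.0.0.1   # target ns -> initiator
```

The harness then prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), which is why `nvmf_tgt` below is launched through `ip netns exec`.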
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2474916 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2474916 ']' 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:54.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:54.469 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.469 [2024-12-09 11:08:55.547659] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:57:54.469 [2024-12-09 11:08:55.547737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:54.729 [2024-12-09 11:08:55.650382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:57:54.729 [2024-12-09 11:08:55.696557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:54.729 [2024-12-09 11:08:55.696599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:57:54.729 [2024-12-09 11:08:55.696610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:54.729 [2024-12-09 11:08:55.696636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:54.729 [2024-12-09 11:08:55.696650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:54.729 [2024-12-09 11:08:55.698120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:54.729 [2024-12-09 11:08:55.698207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:54.729 [2024-12-09 11:08:55.698209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.729 [2024-12-09 11:08:55.863384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.729 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 Malloc0 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 [2024-12-09 
11:08:55.938408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 [2024-12-09 11:08:55.950352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 Malloc1 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2475065 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2475065 /var/tmp/bdevperf.sock 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2475065 ']' 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:57:54.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:54.989 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.248 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:55.248 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:57:55.248 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:57:55.248 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.248 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.507 NVMe0n1 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:55.507 1 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:57:55.507 11:08:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.507 request: 00:57:55.507 { 00:57:55.507 "name": "NVMe0", 00:57:55.507 "trtype": "tcp", 00:57:55.507 "traddr": "10.0.0.2", 00:57:55.507 "adrfam": "ipv4", 00:57:55.507 "trsvcid": "4420", 00:57:55.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:55.507 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:57:55.507 "hostaddr": "10.0.0.1", 00:57:55.507 "prchk_reftag": false, 00:57:55.507 "prchk_guard": false, 00:57:55.507 "hdgst": false, 00:57:55.507 "ddgst": false, 00:57:55.507 "allow_unrecognized_csi": false, 00:57:55.507 "method": "bdev_nvme_attach_controller", 00:57:55.507 "req_id": 1 00:57:55.507 } 00:57:55.507 Got JSON-RPC error response 00:57:55.507 response: 00:57:55.507 { 00:57:55.507 "code": -114, 00:57:55.507 "message": "A controller named NVMe0 already exists with the specified network path" 00:57:55.507 } 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:57:55.507 11:08:56 
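The `NOT rpc_cmd ...` pattern above asserts that the duplicate attach is rejected: the test passes only when the wrapped command fails. A minimal sketch of that wrapper (the real `NOT` in `autotest_common.sh` also tracks the exit status in `es`; this shows just the inversion):

```shell
# Sketch of the NOT expected-failure wrapper used by the harness:
# invert the wrapped command's exit status so an expected failure
# (like re-attaching an existing controller name) counts as a pass.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    else
        return 0    # command failed as expected -> test success
    fi
}
```

Here `NOT rpc_cmd ... bdev_nvme_attach_controller -b NVMe0 ...` succeeds precisely because the RPC returns error `-114` ("A controller named NVMe0 already exists with the specified network path").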
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:57:55.507 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.508 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:57:55.508 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.508 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.508 request: 00:57:55.508 { 00:57:55.508 "name": "NVMe0", 00:57:55.508 "trtype": "tcp", 00:57:55.508 "traddr": "10.0.0.2", 00:57:55.508 "adrfam": "ipv4", 00:57:55.508 "trsvcid": "4420", 00:57:55.508 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:57:55.508 "hostaddr": "10.0.0.1", 00:57:55.508 "prchk_reftag": false, 00:57:55.508 "prchk_guard": false, 00:57:55.508 "hdgst": false, 00:57:55.508 "ddgst": false, 00:57:55.508 "allow_unrecognized_csi": false, 00:57:55.508 "method": "bdev_nvme_attach_controller", 00:57:55.508 "req_id": 1 00:57:55.508 } 00:57:55.508 Got JSON-RPC error response 00:57:55.508 response: 00:57:55.508 { 00:57:55.508 "code": -114, 00:57:55.508 "message": "A controller named NVMe0 already exists with the specified network path" 00:57:55.508 } 00:57:55.508 11:08:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:57:55.508 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.767 request: 00:57:55.767 { 00:57:55.767 "name": "NVMe0", 00:57:55.767 "trtype": "tcp", 00:57:55.767 "traddr": "10.0.0.2", 00:57:55.767 "adrfam": "ipv4", 00:57:55.767 "trsvcid": "4420", 00:57:55.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:55.767 "hostaddr": "10.0.0.1", 00:57:55.767 "prchk_reftag": false, 00:57:55.767 "prchk_guard": false, 00:57:55.767 "hdgst": false, 00:57:55.767 "ddgst": false, 00:57:55.767 "multipath": "disable", 00:57:55.767 "allow_unrecognized_csi": false, 00:57:55.767 "method": "bdev_nvme_attach_controller", 00:57:55.767 "req_id": 1 00:57:55.767 } 00:57:55.767 Got JSON-RPC error response 00:57:55.767 response: 00:57:55.767 { 00:57:55.767 "code": -114, 00:57:55.767 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:57:55.767 } 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.767 request: 00:57:55.767 { 00:57:55.767 "name": "NVMe0", 00:57:55.767 "trtype": "tcp", 00:57:55.767 "traddr": "10.0.0.2", 00:57:55.767 "adrfam": "ipv4", 00:57:55.767 "trsvcid": "4420", 00:57:55.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:55.767 "hostaddr": "10.0.0.1", 00:57:55.767 "prchk_reftag": false, 00:57:55.767 "prchk_guard": false, 00:57:55.767 "hdgst": false, 00:57:55.767 "ddgst": false, 00:57:55.767 "multipath": "failover", 00:57:55.767 "allow_unrecognized_csi": false, 00:57:55.767 "method": "bdev_nvme_attach_controller", 00:57:55.767 "req_id": 1 00:57:55.767 } 00:57:55.767 Got JSON-RPC error response 00:57:55.767 response: 00:57:55.767 { 00:57:55.767 "code": -114, 00:57:55.767 "message": "A controller named NVMe0 already exists with the specified network path" 00:57:55.767 } 00:57:55.767 11:08:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.767 NVMe0n1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:57:55.767 11:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:56.027 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:57:56.027 11:08:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:57:57.408 { 00:57:57.408 "results": [ 00:57:57.408 { 00:57:57.408 "job": "NVMe0n1", 00:57:57.408 "core_mask": "0x1", 00:57:57.408 "workload": "write", 00:57:57.408 "status": "finished", 00:57:57.408 "queue_depth": 128, 00:57:57.408 "io_size": 4096, 00:57:57.408 "runtime": 1.00616, 00:57:57.408 "iops": 24812.157112188917, 00:57:57.408 "mibps": 96.92248871948796, 00:57:57.408 "io_failed": 0, 00:57:57.408 "io_timeout": 0, 00:57:57.408 "avg_latency_us": 5146.393606910544, 00:57:57.408 "min_latency_us": 2350.7478260869566, 00:57:57.408 "max_latency_us": 8548.173913043478 00:57:57.408 } 00:57:57.408 ], 00:57:57.408 "core_count": 1 00:57:57.408 } 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2475065 ']' 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2475065' 00:57:57.408 killing process with pid 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2475065 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:57.408 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:57:57.409 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:57:57.668 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:57:57.668 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:57:57.668 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:57:57.668 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:57:57.668 [2024-12-09 11:08:56.081061] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:57:57.668 [2024-12-09 11:08:56.081147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475065 ] 00:57:57.668 [2024-12-09 11:08:56.209877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:57.668 [2024-12-09 11:08:56.261160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:57.668 [2024-12-09 11:08:57.042627] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 467667c6-f94f-4a62-bf35-12b86c4219fc already exists 00:57:57.668 [2024-12-09 11:08:57.042673] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:467667c6-f94f-4a62-bf35-12b86c4219fc alias for bdev NVMe1n1 00:57:57.668 [2024-12-09 11:08:57.042690] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:57:57.668 Running I/O for 1 seconds... 00:57:57.668 24773.00 IOPS, 96.77 MiB/s 00:57:57.668 Latency(us) 00:57:57.668 [2024-12-09T10:08:58.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:57.668 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:57:57.668 NVMe0n1 : 1.01 24812.16 96.92 0.00 0.00 5146.39 2350.75 8548.17 00:57:57.668 [2024-12-09T10:08:58.844Z] =================================================================================================================== 00:57:57.668 [2024-12-09T10:08:58.844Z] Total : 24812.16 96.92 0.00 0.00 5146.39 2350.75 8548.17 00:57:57.668 Received shutdown signal, test time was about 1.000000 seconds 00:57:57.668 00:57:57.668 Latency(us) 00:57:57.668 [2024-12-09T10:08:58.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:57.668 [2024-12-09T10:08:58.844Z] =================================================================================================================== 00:57:57.668 [2024-12-09T10:08:58.844Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:57:57.668 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:57:57.668 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:57:57.668 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:57.669 rmmod nvme_tcp 00:57:57.669 rmmod nvme_fabrics 00:57:57.669 rmmod nvme_keyring 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2474916 ']' 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2474916 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2474916 ']' 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2474916 
00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474916 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474916' 00:57:57.669 killing process with pid 2474916 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2474916 00:57:57.669 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2474916 00:57:57.928 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:57.928 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:57.928 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:57.928 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:57:57.928 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:57.929 11:08:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:00.481 00:58:00.481 real 0m12.676s 00:58:00.481 user 0m14.684s 00:58:00.481 sys 0m6.024s 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:58:00.481 ************************************ 00:58:00.481 END TEST nvmf_multicontroller 00:58:00.481 ************************************ 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:58:00.481 ************************************ 00:58:00.481 START TEST nvmf_aer 00:58:00.481 ************************************ 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:58:00.481 * Looking for test storage... 
00:58:00.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.481 --rc genhtml_branch_coverage=1 00:58:00.481 --rc genhtml_function_coverage=1 00:58:00.481 --rc genhtml_legend=1 00:58:00.481 --rc geninfo_all_blocks=1 00:58:00.481 --rc geninfo_unexecuted_blocks=1 00:58:00.481 00:58:00.481 ' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.481 --rc 
genhtml_branch_coverage=1 00:58:00.481 --rc genhtml_function_coverage=1 00:58:00.481 --rc genhtml_legend=1 00:58:00.481 --rc geninfo_all_blocks=1 00:58:00.481 --rc geninfo_unexecuted_blocks=1 00:58:00.481 00:58:00.481 ' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.481 --rc genhtml_branch_coverage=1 00:58:00.481 --rc genhtml_function_coverage=1 00:58:00.481 --rc genhtml_legend=1 00:58:00.481 --rc geninfo_all_blocks=1 00:58:00.481 --rc geninfo_unexecuted_blocks=1 00:58:00.481 00:58:00.481 ' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:00.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.481 --rc genhtml_branch_coverage=1 00:58:00.481 --rc genhtml_function_coverage=1 00:58:00.481 --rc genhtml_legend=1 00:58:00.481 --rc geninfo_all_blocks=1 00:58:00.481 --rc geninfo_unexecuted_blocks=1 00:58:00.481 00:58:00.481 ' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:00.481 11:09:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:00.481 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:00.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:58:00.482 11:09:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:07.060 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:58:07.061 Found 0000:af:00.0 (0x8086 - 0x159b) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:58:07.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:07.061 11:09:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:58:07.061 Found net devices under 0000:af:00.0: cvl_0_0 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:58:07.061 Found net devices under 0000:af:00.1: cvl_0_1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:07.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:58:07.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:58:07.061 00:58:07.061 --- 10.0.0.2 ping statistics --- 00:58:07.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:07.061 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:07.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:07.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:58:07.061 00:58:07.061 --- 10.0.0.1 ping statistics --- 00:58:07.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:07.061 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2478629 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2478629 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2478629 ']' 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:07.061 11:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.061 [2024-12-09 11:09:07.927054] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:58:07.061 [2024-12-09 11:09:07.927107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:07.061 [2024-12-09 11:09:08.040865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:58:07.061 [2024-12-09 11:09:08.094728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:58:07.062 [2024-12-09 11:09:08.094775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:07.062 [2024-12-09 11:09:08.094791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:07.062 [2024-12-09 11:09:08.094805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:07.062 [2024-12-09 11:09:08.094816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:07.062 [2024-12-09 11:09:08.096515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:07.062 [2024-12-09 11:09:08.096604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:07.062 [2024-12-09 11:09:08.096697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:58:07.062 [2024-12-09 11:09:08.096703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:07.062 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:07.062 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:58:07.062 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:07.062 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:07.062 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 [2024-12-09 11:09:08.263236] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 Malloc0 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 [2024-12-09 11:09:08.330855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.322 [ 00:58:07.322 { 00:58:07.322 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:58:07.322 "subtype": "Discovery", 00:58:07.322 "listen_addresses": [], 00:58:07.322 "allow_any_host": true, 00:58:07.322 "hosts": [] 00:58:07.322 }, 00:58:07.322 { 00:58:07.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:58:07.322 "subtype": "NVMe", 00:58:07.322 "listen_addresses": [ 00:58:07.322 { 00:58:07.322 "trtype": "TCP", 00:58:07.322 "adrfam": "IPv4", 00:58:07.322 "traddr": "10.0.0.2", 00:58:07.322 "trsvcid": "4420" 00:58:07.322 } 00:58:07.322 ], 00:58:07.322 "allow_any_host": true, 00:58:07.322 "hosts": [], 00:58:07.322 "serial_number": "SPDK00000000000001", 00:58:07.322 "model_number": "SPDK bdev Controller", 00:58:07.322 "max_namespaces": 2, 00:58:07.322 "min_cntlid": 1, 00:58:07.322 "max_cntlid": 65519, 00:58:07.322 "namespaces": [ 00:58:07.322 { 00:58:07.322 "nsid": 1, 00:58:07.322 "bdev_name": "Malloc0", 00:58:07.322 "name": "Malloc0", 00:58:07.322 "nguid": "855F1D60F89540518E077BD9C71C226D", 00:58:07.322 "uuid": "855f1d60-f895-4051-8e07-7bd9c71c226d" 00:58:07.322 } 00:58:07.322 ] 00:58:07.322 } 00:58:07.322 ] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2478809 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:58:07.322 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.582 Malloc1 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.582 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.582 [ 00:58:07.582 { 00:58:07.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:58:07.582 "subtype": "Discovery", 00:58:07.582 "listen_addresses": [], 00:58:07.582 "allow_any_host": true, 00:58:07.582 "hosts": [] 00:58:07.582 }, 00:58:07.582 { 00:58:07.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:58:07.582 "subtype": "NVMe", 00:58:07.582 "listen_addresses": [ 00:58:07.582 { 00:58:07.582 "trtype": "TCP", 00:58:07.582 "adrfam": "IPv4", 00:58:07.582 "traddr": "10.0.0.2", 00:58:07.842 Asynchronous Event Request test 00:58:07.842 Attaching to 10.0.0.2 00:58:07.842 Attached to 10.0.0.2 00:58:07.842 Registering asynchronous event callbacks... 
00:58:07.842 Starting namespace attribute notice tests for all controllers... 00:58:07.842 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:58:07.842 aer_cb - Changed Namespace 00:58:07.842 Cleaning up... 00:58:07.842 "trsvcid": "4420" 00:58:07.842 } 00:58:07.842 ], 00:58:07.842 "allow_any_host": true, 00:58:07.842 "hosts": [], 00:58:07.842 "serial_number": "SPDK00000000000001", 00:58:07.842 "model_number": "SPDK bdev Controller", 00:58:07.842 "max_namespaces": 2, 00:58:07.842 "min_cntlid": 1, 00:58:07.842 "max_cntlid": 65519, 00:58:07.842 "namespaces": [ 00:58:07.842 { 00:58:07.842 "nsid": 1, 00:58:07.842 "bdev_name": "Malloc0", 00:58:07.842 "name": "Malloc0", 00:58:07.842 "nguid": "855F1D60F89540518E077BD9C71C226D", 00:58:07.842 "uuid": "855f1d60-f895-4051-8e07-7bd9c71c226d" 00:58:07.842 }, 00:58:07.842 { 00:58:07.842 "nsid": 2, 00:58:07.842 "bdev_name": "Malloc1", 00:58:07.842 "name": "Malloc1", 00:58:07.842 "nguid": "4D203DD50792437C84B84EFB3A3E0D23", 00:58:07.842 "uuid": "4d203dd5-0792-437c-84b8-4efb3a3e0d23" 00:58:07.842 } 00:58:07.842 ] 00:58:07.842 } 00:58:07.842 ] 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2478809 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:07.842 rmmod nvme_tcp 00:58:07.842 rmmod nvme_fabrics 00:58:07.842 rmmod nvme_keyring 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2478629 ']' 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2478629 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2478629 ']' 00:58:07.842 11:09:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2478629 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:07.842 11:09:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478629 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478629' 00:58:08.102 killing process with pid 2478629 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2478629 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2478629 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:08.102 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:08.362 11:09:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:10.271 00:58:10.271 real 0m10.221s 00:58:10.271 user 0m6.658s 00:58:10.271 sys 0m5.355s 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:58:10.271 ************************************ 00:58:10.271 END TEST nvmf_aer 00:58:10.271 ************************************ 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:10.271 11:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:58:10.531 ************************************ 00:58:10.531 START TEST nvmf_async_init 00:58:10.531 ************************************ 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:58:10.531 * Looking for test storage... 
00:58:10.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:10.531 11:09:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.531 --rc genhtml_branch_coverage=1 00:58:10.531 --rc genhtml_function_coverage=1 00:58:10.531 --rc genhtml_legend=1 00:58:10.531 --rc geninfo_all_blocks=1 00:58:10.531 --rc geninfo_unexecuted_blocks=1 00:58:10.531 
00:58:10.531 ' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.531 --rc genhtml_branch_coverage=1 00:58:10.531 --rc genhtml_function_coverage=1 00:58:10.531 --rc genhtml_legend=1 00:58:10.531 --rc geninfo_all_blocks=1 00:58:10.531 --rc geninfo_unexecuted_blocks=1 00:58:10.531 00:58:10.531 ' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.531 --rc genhtml_branch_coverage=1 00:58:10.531 --rc genhtml_function_coverage=1 00:58:10.531 --rc genhtml_legend=1 00:58:10.531 --rc geninfo_all_blocks=1 00:58:10.531 --rc geninfo_unexecuted_blocks=1 00:58:10.531 00:58:10.531 ' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.531 --rc genhtml_branch_coverage=1 00:58:10.531 --rc genhtml_function_coverage=1 00:58:10.531 --rc genhtml_legend=1 00:58:10.531 --rc geninfo_all_blocks=1 00:58:10.531 --rc geninfo_unexecuted_blocks=1 00:58:10.531 00:58:10.531 ' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.531 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:10.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:58:10.532 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=28863819a866482e9e69b5e04dbb0a8a 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:58:10.792 11:09:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:17.373 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:17.374 11:09:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:58:17.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:58:17.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:58:17.374 Found net devices under 0000:af:00.0: cvl_0_0 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:58:17.374 Found net devices under 0000:af:00.1: cvl_0_1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:17.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:58:17.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:58:17.374 00:58:17.374 --- 10.0.0.2 ping statistics --- 00:58:17.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:17.374 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:17.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:17.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:58:17.374 00:58:17.374 --- 10.0.0.1 ping statistics --- 00:58:17.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:17.374 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2482483 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2482483 00:58:17.374 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2482483 ']' 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:17.375 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.375 [2024-12-09 11:09:18.342727] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:58:17.375 [2024-12-09 11:09:18.342784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:17.375 [2024-12-09 11:09:18.458181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:17.375 [2024-12-09 11:09:18.509818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:17.375 [2024-12-09 11:09:18.509865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:17.375 [2024-12-09 11:09:18.509880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:17.375 [2024-12-09 11:09:18.509894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:17.375 [2024-12-09 11:09:18.509906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:58:17.375 [2024-12-09 11:09:18.510515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.634 [2024-12-09 11:09:18.655613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.634 null0 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.634 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 28863819a866482e9e69b5e04dbb0a8a 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.635 [2024-12-09 11:09:18.695878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.635 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.895 nvme0n1 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.895 [ 00:58:17.895 { 00:58:17.895 "name": "nvme0n1", 00:58:17.895 "aliases": [ 00:58:17.895 "28863819-a866-482e-9e69-b5e04dbb0a8a" 00:58:17.895 ], 00:58:17.895 "product_name": "NVMe disk", 00:58:17.895 "block_size": 512, 00:58:17.895 "num_blocks": 2097152, 00:58:17.895 "uuid": "28863819-a866-482e-9e69-b5e04dbb0a8a", 00:58:17.895 "numa_id": 1, 00:58:17.895 "assigned_rate_limits": { 00:58:17.895 "rw_ios_per_sec": 0, 00:58:17.895 "rw_mbytes_per_sec": 0, 00:58:17.895 "r_mbytes_per_sec": 0, 00:58:17.895 "w_mbytes_per_sec": 0 00:58:17.895 }, 00:58:17.895 "claimed": false, 00:58:17.895 "zoned": false, 00:58:17.895 "supported_io_types": { 00:58:17.895 "read": true, 00:58:17.895 "write": true, 00:58:17.895 "unmap": false, 00:58:17.895 "flush": true, 00:58:17.895 "reset": true, 00:58:17.895 "nvme_admin": true, 00:58:17.895 "nvme_io": true, 00:58:17.895 "nvme_io_md": false, 00:58:17.895 "write_zeroes": true, 00:58:17.895 "zcopy": false, 00:58:17.895 "get_zone_info": false, 00:58:17.895 "zone_management": false, 00:58:17.895 "zone_append": false, 00:58:17.895 "compare": true, 00:58:17.895 "compare_and_write": true, 00:58:17.895 "abort": true, 00:58:17.895 "seek_hole": false, 00:58:17.895 "seek_data": false, 00:58:17.895 "copy": true, 00:58:17.895 
"nvme_iov_md": false 00:58:17.895 }, 00:58:17.895 "memory_domains": [ 00:58:17.895 { 00:58:17.895 "dma_device_id": "system", 00:58:17.895 "dma_device_type": 1 00:58:17.895 } 00:58:17.895 ], 00:58:17.895 "driver_specific": { 00:58:17.895 "nvme": [ 00:58:17.895 { 00:58:17.895 "trid": { 00:58:17.895 "trtype": "TCP", 00:58:17.895 "adrfam": "IPv4", 00:58:17.895 "traddr": "10.0.0.2", 00:58:17.895 "trsvcid": "4420", 00:58:17.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:58:17.895 }, 00:58:17.895 "ctrlr_data": { 00:58:17.895 "cntlid": 1, 00:58:17.895 "vendor_id": "0x8086", 00:58:17.895 "model_number": "SPDK bdev Controller", 00:58:17.895 "serial_number": "00000000000000000000", 00:58:17.895 "firmware_revision": "25.01", 00:58:17.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:58:17.895 "oacs": { 00:58:17.895 "security": 0, 00:58:17.895 "format": 0, 00:58:17.895 "firmware": 0, 00:58:17.895 "ns_manage": 0 00:58:17.895 }, 00:58:17.895 "multi_ctrlr": true, 00:58:17.895 "ana_reporting": false 00:58:17.895 }, 00:58:17.895 "vs": { 00:58:17.895 "nvme_version": "1.3" 00:58:17.895 }, 00:58:17.895 "ns_data": { 00:58:17.895 "id": 1, 00:58:17.895 "can_share": true 00:58:17.895 } 00:58:17.895 } 00:58:17.895 ], 00:58:17.895 "mp_policy": "active_passive" 00:58:17.895 } 00:58:17.895 } 00:58:17.895 ] 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:17.895 11:09:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:17.895 [2024-12-09 11:09:18.949505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:58:17.895 [2024-12-09 11:09:18.949586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1e6bb70 (9): Bad file descriptor 00:58:18.156 [2024-12-09 11:09:19.081779] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 [ 00:58:18.156 { 00:58:18.156 "name": "nvme0n1", 00:58:18.156 "aliases": [ 00:58:18.156 "28863819-a866-482e-9e69-b5e04dbb0a8a" 00:58:18.156 ], 00:58:18.156 "product_name": "NVMe disk", 00:58:18.156 "block_size": 512, 00:58:18.156 "num_blocks": 2097152, 00:58:18.156 "uuid": "28863819-a866-482e-9e69-b5e04dbb0a8a", 00:58:18.156 "numa_id": 1, 00:58:18.156 "assigned_rate_limits": { 00:58:18.156 "rw_ios_per_sec": 0, 00:58:18.156 "rw_mbytes_per_sec": 0, 00:58:18.156 "r_mbytes_per_sec": 0, 00:58:18.156 "w_mbytes_per_sec": 0 00:58:18.156 }, 00:58:18.156 "claimed": false, 00:58:18.156 "zoned": false, 00:58:18.156 "supported_io_types": { 00:58:18.156 "read": true, 00:58:18.156 "write": true, 00:58:18.156 "unmap": false, 00:58:18.156 "flush": true, 00:58:18.156 "reset": true, 00:58:18.156 "nvme_admin": true, 00:58:18.156 "nvme_io": true, 00:58:18.156 "nvme_io_md": false, 00:58:18.156 "write_zeroes": true, 00:58:18.156 "zcopy": false, 00:58:18.156 "get_zone_info": false, 00:58:18.156 "zone_management": false, 00:58:18.156 "zone_append": false, 00:58:18.156 "compare": true, 00:58:18.156 "compare_and_write": true, 00:58:18.156 "abort": true, 00:58:18.156 "seek_hole": false, 00:58:18.156 "seek_data": false, 00:58:18.156 "copy": true, 00:58:18.156 "nvme_iov_md": false 00:58:18.156 }, 00:58:18.156 "memory_domains": [ 
00:58:18.156 { 00:58:18.156 "dma_device_id": "system", 00:58:18.156 "dma_device_type": 1 00:58:18.156 } 00:58:18.156 ], 00:58:18.156 "driver_specific": { 00:58:18.156 "nvme": [ 00:58:18.156 { 00:58:18.156 "trid": { 00:58:18.156 "trtype": "TCP", 00:58:18.156 "adrfam": "IPv4", 00:58:18.156 "traddr": "10.0.0.2", 00:58:18.156 "trsvcid": "4420", 00:58:18.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:58:18.156 }, 00:58:18.156 "ctrlr_data": { 00:58:18.156 "cntlid": 2, 00:58:18.156 "vendor_id": "0x8086", 00:58:18.156 "model_number": "SPDK bdev Controller", 00:58:18.156 "serial_number": "00000000000000000000", 00:58:18.156 "firmware_revision": "25.01", 00:58:18.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:58:18.156 "oacs": { 00:58:18.156 "security": 0, 00:58:18.156 "format": 0, 00:58:18.156 "firmware": 0, 00:58:18.156 "ns_manage": 0 00:58:18.156 }, 00:58:18.156 "multi_ctrlr": true, 00:58:18.156 "ana_reporting": false 00:58:18.156 }, 00:58:18.156 "vs": { 00:58:18.156 "nvme_version": "1.3" 00:58:18.156 }, 00:58:18.156 "ns_data": { 00:58:18.156 "id": 1, 00:58:18.156 "can_share": true 00:58:18.156 } 00:58:18.156 } 00:58:18.156 ], 00:58:18.156 "mp_policy": "active_passive" 00:58:18.156 } 00:58:18.156 } 00:58:18.156 ] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.E88J94obZI 
00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.E88J94obZI 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.E88J94obZI 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 [2024-12-09 11:09:19.146207] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:58:18.156 [2024-12-09 11:09:19.146350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 [2024-12-09 11:09:19.162270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:58:18.156 nvme0n1 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.156 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.156 [ 00:58:18.156 { 00:58:18.156 "name": "nvme0n1", 00:58:18.156 "aliases": [ 00:58:18.156 "28863819-a866-482e-9e69-b5e04dbb0a8a" 00:58:18.156 ], 00:58:18.156 "product_name": "NVMe disk", 00:58:18.156 "block_size": 512, 00:58:18.156 "num_blocks": 2097152, 00:58:18.156 "uuid": "28863819-a866-482e-9e69-b5e04dbb0a8a", 00:58:18.156 "numa_id": 1, 00:58:18.156 "assigned_rate_limits": { 00:58:18.156 "rw_ios_per_sec": 0, 00:58:18.156 
"rw_mbytes_per_sec": 0, 00:58:18.156 "r_mbytes_per_sec": 0, 00:58:18.156 "w_mbytes_per_sec": 0 00:58:18.156 }, 00:58:18.156 "claimed": false, 00:58:18.156 "zoned": false, 00:58:18.156 "supported_io_types": { 00:58:18.156 "read": true, 00:58:18.156 "write": true, 00:58:18.156 "unmap": false, 00:58:18.156 "flush": true, 00:58:18.156 "reset": true, 00:58:18.157 "nvme_admin": true, 00:58:18.157 "nvme_io": true, 00:58:18.157 "nvme_io_md": false, 00:58:18.157 "write_zeroes": true, 00:58:18.157 "zcopy": false, 00:58:18.157 "get_zone_info": false, 00:58:18.157 "zone_management": false, 00:58:18.157 "zone_append": false, 00:58:18.157 "compare": true, 00:58:18.157 "compare_and_write": true, 00:58:18.157 "abort": true, 00:58:18.157 "seek_hole": false, 00:58:18.157 "seek_data": false, 00:58:18.157 "copy": true, 00:58:18.157 "nvme_iov_md": false 00:58:18.157 }, 00:58:18.157 "memory_domains": [ 00:58:18.157 { 00:58:18.157 "dma_device_id": "system", 00:58:18.157 "dma_device_type": 1 00:58:18.157 } 00:58:18.157 ], 00:58:18.157 "driver_specific": { 00:58:18.157 "nvme": [ 00:58:18.157 { 00:58:18.157 "trid": { 00:58:18.157 "trtype": "TCP", 00:58:18.157 "adrfam": "IPv4", 00:58:18.157 "traddr": "10.0.0.2", 00:58:18.157 "trsvcid": "4421", 00:58:18.157 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:58:18.157 }, 00:58:18.157 "ctrlr_data": { 00:58:18.157 "cntlid": 3, 00:58:18.157 "vendor_id": "0x8086", 00:58:18.157 "model_number": "SPDK bdev Controller", 00:58:18.157 "serial_number": "00000000000000000000", 00:58:18.157 "firmware_revision": "25.01", 00:58:18.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:58:18.157 "oacs": { 00:58:18.157 "security": 0, 00:58:18.157 "format": 0, 00:58:18.157 "firmware": 0, 00:58:18.157 "ns_manage": 0 00:58:18.157 }, 00:58:18.157 "multi_ctrlr": true, 00:58:18.157 "ana_reporting": false 00:58:18.157 }, 00:58:18.157 "vs": { 00:58:18.157 "nvme_version": "1.3" 00:58:18.157 }, 00:58:18.157 "ns_data": { 00:58:18.157 "id": 1, 00:58:18.157 "can_share": true 00:58:18.157 } 
00:58:18.157 } 00:58:18.157 ], 00:58:18.157 "mp_policy": "active_passive" 00:58:18.157 } 00:58:18.157 } 00:58:18.157 ] 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.E88J94obZI 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:18.157 rmmod nvme_tcp 00:58:18.157 rmmod nvme_fabrics 00:58:18.157 rmmod nvme_keyring 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:58:18.157 11:09:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2482483 ']' 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2482483 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2482483 ']' 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2482483 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:58:18.157 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2482483 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2482483' 00:58:18.421 killing process with pid 2482483 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2482483 00:58:18.421 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2482483 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:18.681 
11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:18.681 11:09:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:20.588 11:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:20.588 00:58:20.588 real 0m10.270s 00:58:20.588 user 0m3.552s 00:58:20.588 sys 0m5.301s 00:58:20.588 11:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:20.588 11:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:58:20.588 ************************************ 00:58:20.588 END TEST nvmf_async_init 00:58:20.588 ************************************ 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:58:20.848 ************************************ 00:58:20.848 START TEST dma 00:58:20.848 ************************************ 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:58:20.848 * Looking for test storage... 00:58:20.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:58:20.848 11:09:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:21.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.108 --rc genhtml_branch_coverage=1 00:58:21.108 --rc genhtml_function_coverage=1 00:58:21.108 --rc genhtml_legend=1 00:58:21.108 --rc geninfo_all_blocks=1 00:58:21.108 --rc geninfo_unexecuted_blocks=1 00:58:21.108 00:58:21.108 ' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:21.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.108 --rc genhtml_branch_coverage=1 00:58:21.108 --rc genhtml_function_coverage=1 
00:58:21.108 --rc genhtml_legend=1 00:58:21.108 --rc geninfo_all_blocks=1 00:58:21.108 --rc geninfo_unexecuted_blocks=1 00:58:21.108 00:58:21.108 ' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:21.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.108 --rc genhtml_branch_coverage=1 00:58:21.108 --rc genhtml_function_coverage=1 00:58:21.108 --rc genhtml_legend=1 00:58:21.108 --rc geninfo_all_blocks=1 00:58:21.108 --rc geninfo_unexecuted_blocks=1 00:58:21.108 00:58:21.108 ' 00:58:21.108 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:21.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.109 --rc genhtml_branch_coverage=1 00:58:21.109 --rc genhtml_function_coverage=1 00:58:21.109 --rc genhtml_legend=1 00:58:21.109 --rc geninfo_all_blocks=1 00:58:21.109 --rc geninfo_unexecuted_blocks=1 00:58:21.109 00:58:21.109 ' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:58:21.109 
11:09:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:21.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:58:21.109 00:58:21.109 real 0m0.252s 00:58:21.109 user 0m0.159s 00:58:21.109 sys 0m0.109s 00:58:21.109 11:09:22 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:58:21.109 ************************************ 00:58:21.109 END TEST dma 00:58:21.109 ************************************ 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:58:21.109 ************************************ 00:58:21.109 START TEST nvmf_identify 00:58:21.109 ************************************ 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:58:21.109 * Looking for test storage... 
00:58:21.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:58:21.109 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:58:21.370 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.371 --rc genhtml_branch_coverage=1 00:58:21.371 --rc genhtml_function_coverage=1 00:58:21.371 --rc genhtml_legend=1 00:58:21.371 --rc geninfo_all_blocks=1 00:58:21.371 --rc geninfo_unexecuted_blocks=1 00:58:21.371 00:58:21.371 ' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:58:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.371 --rc genhtml_branch_coverage=1 00:58:21.371 --rc genhtml_function_coverage=1 00:58:21.371 --rc genhtml_legend=1 00:58:21.371 --rc geninfo_all_blocks=1 00:58:21.371 --rc geninfo_unexecuted_blocks=1 00:58:21.371 00:58:21.371 ' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.371 --rc genhtml_branch_coverage=1 00:58:21.371 --rc genhtml_function_coverage=1 00:58:21.371 --rc genhtml_legend=1 00:58:21.371 --rc geninfo_all_blocks=1 00:58:21.371 --rc geninfo_unexecuted_blocks=1 00:58:21.371 00:58:21.371 ' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:21.371 --rc genhtml_branch_coverage=1 00:58:21.371 --rc genhtml_function_coverage=1 00:58:21.371 --rc genhtml_legend=1 00:58:21.371 --rc geninfo_all_blocks=1 00:58:21.371 --rc geninfo_unexecuted_blocks=1 00:58:21.371 00:58:21.371 ' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:21.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:58:21.371 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:27.954 11:09:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:58:27.954 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:58:27.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:27.955 
11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:58:27.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:58:27.955 Found net devices under 0000:af:00.0: cvl_0_0 00:58:27.955 11:09:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:58:27.955 Found net devices under 0000:af:00.1: cvl_0_1 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:27.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:27.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:58:27.955 00:58:27.955 --- 10.0.0.2 ping statistics --- 00:58:27.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:27.955 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:27.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:27.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:58:27.955 00:58:27.955 --- 10.0.0.1 ping statistics --- 00:58:27.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:27.955 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2485959 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2485959 00:58:27.955 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2485959 ']' 00:58:27.956 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:27.956 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:27.956 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:27.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:58:27.956 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:27.956 11:09:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:27.956 [2024-12-09 11:09:29.027022] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:58:27.956 [2024-12-09 11:09:29.027084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:28.216 [2024-12-09 11:09:29.142338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:58:28.216 [2024-12-09 11:09:29.198068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:28.216 [2024-12-09 11:09:29.198118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:28.216 [2024-12-09 11:09:29.198135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:28.216 [2024-12-09 11:09:29.198149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:28.216 [2024-12-09 11:09:29.198160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:58:28.216 [2024-12-09 11:09:29.199984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:28.216 [2024-12-09 11:09:29.200071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:28.216 [2024-12-09 11:09:29.200163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:58:28.216 [2024-12-09 11:09:29.200167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.216 [2024-12-09 11:09:29.322967] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.216 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 Malloc0 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 [2024-12-09 11:09:29.446328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 11:09:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:28.479 [ 00:58:28.479 { 00:58:28.479 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:58:28.479 "subtype": "Discovery", 00:58:28.479 "listen_addresses": [ 00:58:28.479 { 00:58:28.479 "trtype": "TCP", 00:58:28.479 "adrfam": "IPv4", 00:58:28.479 "traddr": "10.0.0.2", 00:58:28.479 "trsvcid": "4420" 00:58:28.479 } 00:58:28.479 ], 00:58:28.479 "allow_any_host": true, 00:58:28.479 "hosts": [] 00:58:28.479 }, 00:58:28.479 { 00:58:28.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:58:28.479 "subtype": "NVMe", 00:58:28.479 "listen_addresses": [ 00:58:28.479 { 00:58:28.479 "trtype": "TCP", 00:58:28.479 "adrfam": "IPv4", 00:58:28.479 "traddr": "10.0.0.2", 00:58:28.479 "trsvcid": "4420" 00:58:28.479 } 00:58:28.479 ], 00:58:28.479 "allow_any_host": true, 00:58:28.479 "hosts": [], 00:58:28.479 "serial_number": "SPDK00000000000001", 00:58:28.479 "model_number": "SPDK bdev Controller", 00:58:28.479 "max_namespaces": 32, 00:58:28.479 "min_cntlid": 1, 00:58:28.479 "max_cntlid": 65519, 00:58:28.479 "namespaces": [ 00:58:28.479 { 00:58:28.479 "nsid": 1, 00:58:28.479 "bdev_name": "Malloc0", 00:58:28.479 "name": "Malloc0", 00:58:28.479 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:58:28.479 "eui64": "ABCDEF0123456789", 00:58:28.479 "uuid": "cc659da9-2a6c-4537-ba58-736b70b0ab4a" 00:58:28.479 } 00:58:28.479 ] 00:58:28.479 } 00:58:28.479 ] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:28.479 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:58:28.479 [2024-12-09 11:09:29.509152] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:58:28.479 [2024-12-09 11:09:29.509204] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485981 ] 00:58:28.479 [2024-12-09 11:09:29.580911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:58:28.479 [2024-12-09 11:09:29.580985] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:58:28.479 [2024-12-09 11:09:29.580995] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:58:28.479 [2024-12-09 11:09:29.581018] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:58:28.479 [2024-12-09 11:09:29.581033] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:58:28.479 [2024-12-09 11:09:29.584931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:58:28.479 [2024-12-09 11:09:29.584989] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16a7690 0 00:58:28.479 [2024-12-09 11:09:29.592660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:58:28.479 [2024-12-09 11:09:29.592680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:58:28.479 [2024-12-09 11:09:29.592696] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:58:28.479 [2024-12-09 11:09:29.592704] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:58:28.479 [2024-12-09 11:09:29.592757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.479 [2024-12-09 11:09:29.592767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.479 [2024-12-09 11:09:29.592775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.479 [2024-12-09 11:09:29.592792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:58:28.479 [2024-12-09 11:09:29.592820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.479 [2024-12-09 11:09:29.600656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.479 [2024-12-09 11:09:29.600670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.479 [2024-12-09 11:09:29.600678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.479 [2024-12-09 11:09:29.600686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.479 [2024-12-09 11:09:29.600702] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:58:28.479 [2024-12-09 11:09:29.600712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:58:28.479 [2024-12-09 11:09:29.600723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:58:28.479 [2024-12-09 11:09:29.600748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.479 [2024-12-09 11:09:29.600756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.479 [2024-12-09 11:09:29.600764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 
00:58:28.480 [2024-12-09 11:09:29.600776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.600799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.600973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.600984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.600991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.600999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.601012] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:58:28.480 [2024-12-09 11:09:29.601031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:58:28.480 [2024-12-09 11:09:29.601043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.601069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.601089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.601159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.601169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:58:28.480 [2024-12-09 11:09:29.601177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.601193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:58:28.480 [2024-12-09 11:09:29.601208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.601246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.601265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.601343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.601353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.601361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.601377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601394] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.601420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.601438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.601519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.601530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.601537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.601553] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:58:28.480 [2024-12-09 11:09:29.601563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601690] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:58:28.480 [2024-12-09 11:09:29.601701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.601739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.601758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.601832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.601843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.601850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.601866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:58:28.480 [2024-12-09 11:09:29.601883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.601898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.601909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.601927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 
11:09:29.602016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.602026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.602034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.602041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.602050] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:58:28.480 [2024-12-09 11:09:29.602059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:58:28.480 [2024-12-09 11:09:29.602074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:58:28.480 [2024-12-09 11:09:29.602096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:58:28.480 [2024-12-09 11:09:29.602111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.602119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.602130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.480 [2024-12-09 11:09:29.602149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.602251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:28.480 [2024-12-09 11:09:29.602262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:58:28.480 [2024-12-09 11:09:29.602275] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.602283] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16a7690): datao=0, datal=4096, cccid=0 00:58:28.480 [2024-12-09 11:09:29.602293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709100) on tqpair(0x16a7690): expected_datao=0, payload_size=4096 00:58:28.480 [2024-12-09 11:09:29.602302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.602321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.602329] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.645657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.645672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.645680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.645688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.645706] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:58:28.480 [2024-12-09 11:09:29.645716] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:58:28.480 [2024-12-09 11:09:29.645726] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:58:28.480 [2024-12-09 11:09:29.645735] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:58:28.480 [2024-12-09 11:09:29.645745] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:58:28.480 [2024-12-09 11:09:29.645755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:58:28.480 [2024-12-09 11:09:29.645770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:58:28.480 [2024-12-09 11:09:29.645783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.645791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.645799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.480 [2024-12-09 11:09:29.645810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:28.480 [2024-12-09 11:09:29.645832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.480 [2024-12-09 11:09:29.645983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.480 [2024-12-09 11:09:29.645995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.480 [2024-12-09 11:09:29.646002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.646009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.480 [2024-12-09 11:09:29.646021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.480 [2024-12-09 11:09:29.646028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646046] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:58:28.481 [2024-12-09 11:09:29.646057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:58:28.481 [2024-12-09 11:09:29.646096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:58:28.481 [2024-12-09 11:09:29.646133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:58:28.481 [2024-12-09 11:09:29.646167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:58:28.481 [2024-12-09 11:09:29.646186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:58:28.481 [2024-12-09 11:09:29.646198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.481 [2024-12-09 11:09:29.646237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709100, cid 0, qid 0 00:58:28.481 [2024-12-09 11:09:29.646247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709280, cid 1, qid 0 00:58:28.481 [2024-12-09 11:09:29.646256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709400, cid 2, qid 0 00:58:28.481 [2024-12-09 11:09:29.646264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.481 [2024-12-09 11:09:29.646273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709700, cid 4, qid 0 00:58:28.481 [2024-12-09 11:09:29.646442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.481 [2024-12-09 11:09:29.646453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.481 [2024-12-09 11:09:29.646460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709700) on tqpair=0x16a7690 00:58:28.481 [2024-12-09 11:09:29.646477] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:58:28.481 [2024-12-09 11:09:29.646487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:58:28.481 [2024-12-09 11:09:29.646505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.481 [2024-12-09 11:09:29.646542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709700, cid 4, qid 0 00:58:28.481 [2024-12-09 11:09:29.646631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:28.481 [2024-12-09 11:09:29.646642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:28.481 [2024-12-09 11:09:29.646655] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646663] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16a7690): datao=0, datal=4096, cccid=4 00:58:28.481 [2024-12-09 11:09:29.646678] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709700) on tqpair(0x16a7690): expected_datao=0, payload_size=4096 00:58:28.481 [2024-12-09 11:09:29.646687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646698] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646706] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.481 [2024-12-09 11:09:29.646730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.481 [2024-12-09 11:09:29.646737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1709700) on tqpair=0x16a7690 00:58:28.481 [2024-12-09 11:09:29.646766] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:58:28.481 [2024-12-09 11:09:29.646799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.481 [2024-12-09 11:09:29.646831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.646846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16a7690) 00:58:28.481 [2024-12-09 11:09:29.646856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:58:28.481 [2024-12-09 11:09:29.646881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709700, cid 4, qid 0 00:58:28.481 [2024-12-09 11:09:29.646891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709880, cid 5, qid 0 00:58:28.481 [2024-12-09 11:09:29.647006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:28.481 [2024-12-09 11:09:29.647017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:28.481 [2024-12-09 11:09:29.647024] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.647031] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16a7690): datao=0, datal=1024, cccid=4 00:58:28.481 [2024-12-09 11:09:29.647041] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709700) on tqpair(0x16a7690): expected_datao=0, payload_size=1024 00:58:28.481 [2024-12-09 11:09:29.647050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.647061] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.647068] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.647078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.481 [2024-12-09 11:09:29.647088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.481 [2024-12-09 11:09:29.647095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.481 [2024-12-09 11:09:29.647103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709880) on tqpair=0x16a7690 00:58:28.744 [2024-12-09 11:09:29.687882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.744 [2024-12-09 11:09:29.687900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.744 [2024-12-09 11:09:29.687908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.687916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709700) on tqpair=0x16a7690 00:58:28.744 [2024-12-09 11:09:29.687936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.687944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16a7690) 00:58:28.744 [2024-12-09 11:09:29.687955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.744 [2024-12-09 11:09:29.687989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709700, cid 4, qid 0 00:58:28.744 [2024-12-09 11:09:29.688079] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:28.744 [2024-12-09 11:09:29.688090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:28.744 [2024-12-09 11:09:29.688097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.688105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16a7690): datao=0, datal=3072, cccid=4 00:58:28.744 [2024-12-09 11:09:29.688114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709700) on tqpair(0x16a7690): expected_datao=0, payload_size=3072 00:58:28.744 [2024-12-09 11:09:29.688123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.688145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.688153] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.744 [2024-12-09 11:09:29.732680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.744 [2024-12-09 11:09:29.732687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709700) on tqpair=0x16a7690 00:58:28.744 [2024-12-09 11:09:29.732713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16a7690) 00:58:28.744 [2024-12-09 11:09:29.732733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.744 [2024-12-09 11:09:29.732761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709700, cid 4, qid 0 00:58:28.744 [2024-12-09 
11:09:29.732843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:28.744 [2024-12-09 11:09:29.732855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:28.744 [2024-12-09 11:09:29.732862] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732870] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16a7690): datao=0, datal=8, cccid=4 00:58:28.744 [2024-12-09 11:09:29.732880] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709700) on tqpair(0x16a7690): expected_datao=0, payload_size=8 00:58:28.744 [2024-12-09 11:09:29.732889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732901] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.732908] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.773801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.744 [2024-12-09 11:09:29.773816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.744 [2024-12-09 11:09:29.773824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.744 [2024-12-09 11:09:29.773832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709700) on tqpair=0x16a7690 00:58:28.744 ===================================================== 00:58:28.744 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:58:28.744 ===================================================== 00:58:28.744 Controller Capabilities/Features 00:58:28.744 ================================ 00:58:28.744 Vendor ID: 0000 00:58:28.744 Subsystem Vendor ID: 0000 00:58:28.744 Serial Number: .................... 00:58:28.744 Model Number: ........................................ 
00:58:28.744 Firmware Version: 25.01
00:58:28.744 Recommended Arb Burst: 0
00:58:28.744 IEEE OUI Identifier: 00 00 00
00:58:28.744 Multi-path I/O
00:58:28.744 May have multiple subsystem ports: No
00:58:28.744 May have multiple controllers: No
00:58:28.744 Associated with SR-IOV VF: No
00:58:28.744 Max Data Transfer Size: 131072
00:58:28.744 Max Number of Namespaces: 0
00:58:28.744 Max Number of I/O Queues: 1024
00:58:28.744 NVMe Specification Version (VS): 1.3
00:58:28.744 NVMe Specification Version (Identify): 1.3
00:58:28.744 Maximum Queue Entries: 128
00:58:28.744 Contiguous Queues Required: Yes
00:58:28.744 Arbitration Mechanisms Supported
00:58:28.744 Weighted Round Robin: Not Supported
00:58:28.744 Vendor Specific: Not Supported
00:58:28.744 Reset Timeout: 15000 ms
00:58:28.744 Doorbell Stride: 4 bytes
00:58:28.744 NVM Subsystem Reset: Not Supported
00:58:28.744 Command Sets Supported
00:58:28.744 NVM Command Set: Supported
00:58:28.744 Boot Partition: Not Supported
00:58:28.744 Memory Page Size Minimum: 4096 bytes
00:58:28.744 Memory Page Size Maximum: 4096 bytes
00:58:28.744 Persistent Memory Region: Not Supported
00:58:28.744 Optional Asynchronous Events Supported
00:58:28.744 Namespace Attribute Notices: Not Supported
00:58:28.744 Firmware Activation Notices: Not Supported
00:58:28.744 ANA Change Notices: Not Supported
00:58:28.744 PLE Aggregate Log Change Notices: Not Supported
00:58:28.744 LBA Status Info Alert Notices: Not Supported
00:58:28.744 EGE Aggregate Log Change Notices: Not Supported
00:58:28.744 Normal NVM Subsystem Shutdown event: Not Supported
00:58:28.744 Zone Descriptor Change Notices: Not Supported
00:58:28.744 Discovery Log Change Notices: Supported
00:58:28.744 Controller Attributes
00:58:28.744 128-bit Host Identifier: Not Supported
00:58:28.744 Non-Operational Permissive Mode: Not Supported
00:58:28.744 NVM Sets: Not Supported
00:58:28.744 Read Recovery Levels: Not Supported
00:58:28.744 Endurance Groups: Not Supported
00:58:28.744 Predictable Latency Mode: Not Supported
00:58:28.744 Traffic Based Keep Alive: Not Supported
00:58:28.744 Namespace Granularity: Not Supported
00:58:28.744 SQ Associations: Not Supported
00:58:28.744 UUID List: Not Supported
00:58:28.744 Multi-Domain Subsystem: Not Supported
00:58:28.744 Fixed Capacity Management: Not Supported
00:58:28.744 Variable Capacity Management: Not Supported
00:58:28.744 Delete Endurance Group: Not Supported
00:58:28.744 Delete NVM Set: Not Supported
00:58:28.744 Extended LBA Formats Supported: Not Supported
00:58:28.744 Flexible Data Placement Supported: Not Supported
00:58:28.744 
00:58:28.744 Controller Memory Buffer Support
00:58:28.744 ================================
00:58:28.744 Supported: No
00:58:28.744 
00:58:28.744 Persistent Memory Region Support
00:58:28.744 ================================
00:58:28.744 Supported: No
00:58:28.744 
00:58:28.744 Admin Command Set Attributes
00:58:28.744 ============================
00:58:28.744 Security Send/Receive: Not Supported
00:58:28.744 Format NVM: Not Supported
00:58:28.744 Firmware Activate/Download: Not Supported
00:58:28.744 Namespace Management: Not Supported
00:58:28.744 Device Self-Test: Not Supported
00:58:28.744 Directives: Not Supported
00:58:28.744 NVMe-MI: Not Supported
00:58:28.744 Virtualization Management: Not Supported
00:58:28.744 Doorbell Buffer Config: Not Supported
00:58:28.744 Get LBA Status Capability: Not Supported
00:58:28.744 Command & Feature Lockdown Capability: Not Supported
00:58:28.744 Abort Command Limit: 1
00:58:28.744 Async Event Request Limit: 4
00:58:28.744 Number of Firmware Slots: N/A
00:58:28.744 Firmware Slot 1 Read-Only: N/A
00:58:28.744 Firmware Activation Without Reset: N/A
00:58:28.744 Multiple Update Detection Support: N/A
00:58:28.744 Firmware Update Granularity: No Information Provided
00:58:28.744 Per-Namespace SMART Log: No
00:58:28.744 Asymmetric Namespace Access Log Page: Not Supported
00:58:28.744 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:58:28.744 Command Effects Log Page: Not Supported
00:58:28.744 Get Log Page Extended Data: Supported
00:58:28.744 Telemetry Log Pages: Not Supported
00:58:28.744 Persistent Event Log Pages: Not Supported
00:58:28.744 Supported Log Pages Log Page: May Support
00:58:28.744 Commands Supported & Effects Log Page: Not Supported
00:58:28.744 Feature Identifiers & Effects Log Page: May Support
00:58:28.744 NVMe-MI Commands & Effects Log Page: May Support
00:58:28.744 Data Area 4 for Telemetry Log: Not Supported
00:58:28.744 Error Log Page Entries Supported: 128
00:58:28.744 Keep Alive: Not Supported
00:58:28.744 
00:58:28.744 NVM Command Set Attributes
00:58:28.744 ==========================
00:58:28.744 Submission Queue Entry Size
00:58:28.744 Max: 1
00:58:28.744 Min: 1
00:58:28.744 Completion Queue Entry Size
00:58:28.744 Max: 1
00:58:28.744 Min: 1
00:58:28.744 Number of Namespaces: 0
00:58:28.744 Compare Command: Not Supported
00:58:28.744 Write Uncorrectable Command: Not Supported
00:58:28.744 Dataset Management Command: Not Supported
00:58:28.744 Write Zeroes Command: Not Supported
00:58:28.744 Set Features Save Field: Not Supported
00:58:28.744 Reservations: Not Supported
00:58:28.744 Timestamp: Not Supported
00:58:28.744 Copy: Not Supported
00:58:28.744 Volatile Write Cache: Not Present
00:58:28.744 Atomic Write Unit (Normal): 1
00:58:28.744 Atomic Write Unit (PFail): 1
00:58:28.744 Atomic Compare & Write Unit: 1
00:58:28.744 Fused Compare & Write: Supported
00:58:28.744 Scatter-Gather List
00:58:28.744 SGL Command Set: Supported
00:58:28.744 SGL Keyed: Supported
00:58:28.744 SGL Bit Bucket Descriptor: Not Supported
00:58:28.744 SGL Metadata Pointer: Not Supported
00:58:28.744 Oversized SGL: Not Supported
00:58:28.744 SGL Metadata Address: Not Supported
00:58:28.744 SGL Offset: Supported
00:58:28.745 Transport SGL Data Block: Not Supported
00:58:28.745 Replay Protected Memory Block: Not Supported
00:58:28.745 
00:58:28.745 Firmware Slot Information
00:58:28.745 =========================
00:58:28.745 Active slot: 0
00:58:28.745 
00:58:28.745 
00:58:28.745 Error Log
00:58:28.745 =========
00:58:28.745 
00:58:28.745 Active Namespaces
00:58:28.745 =================
00:58:28.745 Discovery Log Page
00:58:28.745 ==================
00:58:28.745 Generation Counter: 2
00:58:28.745 Number of Records: 2
00:58:28.745 Record Format: 0
00:58:28.745 
00:58:28.745 Discovery Log Entry 0
00:58:28.745 ----------------------
00:58:28.745 Transport Type: 3 (TCP)
00:58:28.745 Address Family: 1 (IPv4)
00:58:28.745 Subsystem Type: 3 (Current Discovery Subsystem)
00:58:28.745 Entry Flags:
00:58:28.745 Duplicate Returned Information: 1
00:58:28.745 Explicit Persistent Connection Support for Discovery: 1
00:58:28.745 Transport Requirements:
00:58:28.745 Secure Channel: Not Required
00:58:28.745 Port ID: 0 (0x0000)
00:58:28.745 Controller ID: 65535 (0xffff)
00:58:28.745 Admin Max SQ Size: 128
00:58:28.745 Transport Service Identifier: 4420
00:58:28.745 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:58:28.745 Transport Address: 10.0.0.2
00:58:28.745 Discovery Log Entry 1
00:58:28.745 ----------------------
00:58:28.745 Transport Type: 3 (TCP)
00:58:28.745 Address Family: 1 (IPv4)
00:58:28.745 Subsystem Type: 2 (NVM Subsystem)
00:58:28.745 Entry Flags:
00:58:28.745 Duplicate Returned Information: 0
00:58:28.745 Explicit Persistent Connection Support for Discovery: 0
00:58:28.745 Transport Requirements:
00:58:28.745 Secure Channel: Not Required
00:58:28.745 Port ID: 0 (0x0000)
00:58:28.745 Controller ID: 65535 (0xffff)
00:58:28.745 Admin Max SQ Size: 128
00:58:28.745 Transport Service Identifier: 4420
00:58:28.745 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:58:28.745 Transport Address: 10.0.0.2 [2024-12-09 11:09:29.773962] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:58:28.745 [2024-12-09
11:09:29.773981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709100) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.773992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:28.745 [2024-12-09 11:09:29.774002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709280) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:28.745 [2024-12-09 11:09:29.774021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709400) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:28.745 [2024-12-09 11:09:29.774042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:28.745 [2024-12-09 11:09:29.774069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.774199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 
11:09:29.774211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.774218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.774370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.774381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.774389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774405] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:58:28.745 [2024-12-09 11:09:29.774414] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:58:28.745 [2024-12-09 11:09:29.774431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 
[2024-12-09 11:09:29.774446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.774549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.774559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.774567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.774708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.774719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.774726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on 
tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.774874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.774885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.774892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.774916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.774931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.774942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.774960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.775025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.775036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:58:28.745 [2024-12-09 11:09:29.775043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.775065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.775091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.775109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.745 [2024-12-09 11:09:29.775182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.745 [2024-12-09 11:09:29.775193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.745 [2024-12-09 11:09:29.775200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.745 [2024-12-09 11:09:29.775222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.745 [2024-12-09 11:09:29.775237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.745 [2024-12-09 11:09:29.775248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.745 [2024-12-09 11:09:29.775269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.775337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.775348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.775355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.775377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.775403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.775421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.775490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.775501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.775508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.775530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.775556] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.775575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.775660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.775671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.775678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.775701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.775727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.775745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.775818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.775829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.775836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.775858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775866] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.775884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.775902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.775974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.775985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.775992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.775999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.776058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.776145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776160] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.776219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.776304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.776378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 
11:09:29.776457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.776531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.776612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 
11:09:29.776691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.776805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.776862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.776880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.776949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.776960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.776967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.776989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.776997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.777004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.777015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.777033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.777101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.777112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.746 [2024-12-09 11:09:29.777119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.777127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.746 [2024-12-09 11:09:29.777141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.777149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.746 [2024-12-09 11:09:29.777156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.746 [2024-12-09 11:09:29.777167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.746 [2024-12-09 11:09:29.777185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.746 [2024-12-09 11:09:29.777258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.746 [2024-12-09 11:09:29.777268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.747 [2024-12-09 11:09:29.777278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.747 [2024-12-09 11:09:29.777301] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.747 [2024-12-09 11:09:29.777327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.747 [2024-12-09 11:09:29.777345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.747 [2024-12-09 11:09:29.777413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.747 [2024-12-09 11:09:29.777424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.747 [2024-12-09 11:09:29.777431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.747 [2024-12-09 11:09:29.777454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.747 [2024-12-09 11:09:29.777480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.747 [2024-12-09 11:09:29.777497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.747 [2024-12-09 11:09:29.777570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.747 [2024-12-09 11:09:29.777580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.747 [2024-12-09 11:09:29.777588] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.747 [2024-12-09 11:09:29.777611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.777626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16a7690) 00:58:28.747 [2024-12-09 11:09:29.777637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.747 [2024-12-09 11:09:29.781666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709580, cid 3, qid 0 00:58:28.747 [2024-12-09 11:09:29.781827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:28.747 [2024-12-09 11:09:29.781838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:28.747 [2024-12-09 11:09:29.781845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:28.747 [2024-12-09 11:09:29.781853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709580) on tqpair=0x16a7690 00:58:28.747 [2024-12-09 11:09:29.781867] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:58:29.010 00:58:29.010 11:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:58:29.010 [2024-12-09 11:09:29.949432] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
00:58:29.010 [2024-12-09 11:09:29.949480] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485995 ]
00:58:29.010 [2024-12-09 11:09:30.018959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:58:29.010 [2024-12-09 11:09:30.019032] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:58:29.010 [2024-12-09 11:09:30.019042] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:58:29.010 [2024-12-09 11:09:30.019064] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:58:29.010 [2024-12-09 11:09:30.019080] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:58:29.010 [2024-12-09 11:09:30.019625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:58:29.011 [2024-12-09 11:09:30.019679] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1568690 0
00:58:29.011 [2024-12-09 11:09:30.029658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:58:29.011 [2024-12-09 11:09:30.029682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:58:29.011 [2024-12-09 11:09:30.029695] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:58:29.011 [2024-12-09 11:09:30.029703] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:58:29.011 [2024-12-09 11:09:30.029751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.029762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.029771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690)
00:58:29.011 [2024-12-09 11:09:30.029789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:58:29.011 [2024-12-09 11:09:30.029817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0
00:58:29.011 [2024-12-09 11:09:30.036658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:58:29.011 [2024-12-09 11:09:30.036674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:58:29.011 [2024-12-09 11:09:30.036682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.036691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690
00:58:29.011 [2024-12-09 11:09:30.036708] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:58:29.011 [2024-12-09 11:09:30.036721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:58:29.011 [2024-12-09 11:09:30.036733] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:58:29.011 [2024-12-09 11:09:30.036758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.036767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.036776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690)
00:58:29.011 [2024-12-09 11:09:30.036790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:29.011 [2024-12-09 11:09:30.036815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0
00:58:29.011 [2024-12-09 11:09:30.036912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:58:29.011 [2024-12-09 11:09:30.036924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:58:29.011 [2024-12-09 11:09:30.036933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.036942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690
00:58:29.011 [2024-12-09 11:09:30.036956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:58:29.011 [2024-12-09 11:09:30.036973] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:58:29.011 [2024-12-09 11:09:30.036990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.036998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.037006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690)
00:58:29.011 [2024-12-09 11:09:30.037018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:29.011 [2024-12-09 11:09:30.037039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0
00:58:29.011 [2024-12-09 11:09:30.037110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:58:29.011 [2024-12-09 11:09:30.037122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:58:29.011 [2024-12-09 11:09:30.037130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:58:29.011 [2024-12-09 11:09:30.037139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690
00:58:29.011 [2024-12-09 11:09:30.037149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1]
setting state to check en (no timeout) 00:58:29.011 [2024-12-09 11:09:30.037165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.011 [2024-12-09 11:09:30.037205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.011 [2024-12-09 11:09:30.037225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.011 [2024-12-09 11:09:30.037292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.011 [2024-12-09 11:09:30.037304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.011 [2024-12-09 11:09:30.037312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.011 [2024-12-09 11:09:30.037332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.011 [2024-12-09 11:09:30.037377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.011 [2024-12-09 11:09:30.037396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.011 [2024-12-09 11:09:30.037468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.011 [2024-12-09 11:09:30.037480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.011 [2024-12-09 11:09:30.037487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.011 [2024-12-09 11:09:30.037505] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:58:29.011 [2024-12-09 11:09:30.037516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037650] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:58:29.011 [2024-12-09 11:09:30.037664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.011 [2024-12-09 11:09:30.037708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.011 [2024-12-09 11:09:30.037728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.011 [2024-12-09 11:09:30.037800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.011 [2024-12-09 11:09:30.037812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.011 [2024-12-09 11:09:30.037820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.011 [2024-12-09 11:09:30.037837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:58:29.011 [2024-12-09 11:09:30.037855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.037873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.011 [2024-12-09 11:09:30.037885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.011 [2024-12-09 11:09:30.037903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.011 [2024-12-09 11:09:30.037978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.011 [2024-12-09 11:09:30.037989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.011 [2024-12-09 11:09:30.037997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.038005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.011 [2024-12-09 11:09:30.038013] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:58:29.011 [2024-12-09 11:09:30.038023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:58:29.011 [2024-12-09 11:09:30.038038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:58:29.011 [2024-12-09 11:09:30.038056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:58:29.011 [2024-12-09 11:09:30.038070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.038079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.011 [2024-12-09 11:09:30.038090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.011 [2024-12-09 11:09:30.038109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.011 [2024-12-09 11:09:30.038220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.011 [2024-12-09 11:09:30.038232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.011 [2024-12-09 11:09:30.038239] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.011 [2024-12-09 11:09:30.038247] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=4096, cccid=0 00:58:29.011 [2024-12-09 11:09:30.038260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca100) on tqpair(0x1568690): expected_datao=0, payload_size=4096 00:58:29.011 [2024-12-09 11:09:30.038269] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.038281] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.038289] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.078731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.078754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.078762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.078771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.012 [2024-12-09 11:09:30.078791] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:58:29.012 [2024-12-09 11:09:30.078802] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:58:29.012 [2024-12-09 11:09:30.078811] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:58:29.012 [2024-12-09 11:09:30.078819] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:58:29.012 [2024-12-09 11:09:30.078829] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:58:29.012 [2024-12-09 11:09:30.078840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.078856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.078870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.078878] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.078886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.078899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:29.012 [2024-12-09 11:09:30.078922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca100, cid 0, qid 0 00:58:29.012 [2024-12-09 11:09:30.078999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.079010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.079017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.012 [2024-12-09 11:09:30.079036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:58:29.012 [2024-12-09 11:09:30.079073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:58:29.012 [2024-12-09 11:09:30.079109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:58:29.012 [2024-12-09 11:09:30.079148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:58:29.012 [2024-12-09 11:09:30.079183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.079202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.079214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.012 [2024-12-09 11:09:30.079253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x15ca100, cid 0, qid 0 00:58:29.012 [2024-12-09 11:09:30.079263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca280, cid 1, qid 0 00:58:29.012 [2024-12-09 11:09:30.079271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca400, cid 2, qid 0 00:58:29.012 [2024-12-09 11:09:30.079280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.012 [2024-12-09 11:09:30.079289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.012 [2024-12-09 11:09:30.079393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.079404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.079411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.012 [2024-12-09 11:09:30.079427] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:58:29.012 [2024-12-09 11:09:30.079438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.079453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.079464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.079476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 
11:09:30.079491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.079502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:29.012 [2024-12-09 11:09:30.079521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.012 [2024-12-09 11:09:30.079602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.079613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.079620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.079630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.012 [2024-12-09 11:09:30.083718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.083739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.083753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.083761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.083772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.012 [2024-12-09 11:09:30.083793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.012 [2024-12-09 11:09:30.083912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.012 [2024-12-09 11:09:30.083923] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.012 [2024-12-09 11:09:30.083930] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.083937] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=4096, cccid=4 00:58:29.012 [2024-12-09 11:09:30.083947] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca700) on tqpair(0x1568690): expected_datao=0, payload_size=4096 00:58:29.012 [2024-12-09 11:09:30.083956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.083967] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.083975] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.083988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.083998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.084005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.012 [2024-12-09 11:09:30.084028] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:58:29.012 [2024-12-09 11:09:30.084053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.084070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:58:29.012 [2024-12-09 11:09:30.084083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1568690) 00:58:29.012 [2024-12-09 11:09:30.084102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.012 [2024-12-09 11:09:30.084121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.012 [2024-12-09 11:09:30.084228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.012 [2024-12-09 11:09:30.084240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.012 [2024-12-09 11:09:30.084247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084254] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=4096, cccid=4 00:58:29.012 [2024-12-09 11:09:30.084264] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca700) on tqpair(0x1568690): expected_datao=0, payload_size=4096 00:58:29.012 [2024-12-09 11:09:30.084273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.012 [2024-12-09 11:09:30.084318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.012 [2024-12-09 11:09:30.084325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.012 [2024-12-09 11:09:30.084333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.084351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:58:29.013 
[2024-12-09 11:09:30.084368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.084399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.084418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.013 [2024-12-09 11:09:30.084509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.013 [2024-12-09 11:09:30.084519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.013 [2024-12-09 11:09:30.084526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084534] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=4096, cccid=4 00:58:29.013 [2024-12-09 11:09:30.084543] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca700) on tqpair(0x1568690): expected_datao=0, payload_size=4096 00:58:29.013 [2024-12-09 11:09:30.084552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084563] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084570] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.084594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.084601] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.084620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084704] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:58:29.013 [2024-12-09 11:09:30.084713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:58:29.013 [2024-12-09 11:09:30.084724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:58:29.013 [2024-12-09 11:09:30.084749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084758] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.084768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.084780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.084806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:58:29.013 [2024-12-09 11:09:30.084829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.013 [2024-12-09 11:09:30.084839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca880, cid 5, qid 0 00:58:29.013 [2024-12-09 11:09:30.084925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.084936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.084943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.084962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.084972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.084979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.084986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca880) on tqpair=0x1568690 00:58:29.013 [2024-12-09 
11:09:30.085003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca880, cid 5, qid 0 00:58:29.013 [2024-12-09 11:09:30.085113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.085124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.085131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca880) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.085154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca880, cid 5, qid 0 00:58:29.013 [2024-12-09 11:09:30.085268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.085278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.085285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x15ca880) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.085309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca880, cid 5, qid 0 00:58:29.013 [2024-12-09 11:09:30.085419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.013 [2024-12-09 11:09:30.085429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.013 [2024-12-09 11:09:30.085437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca880) on tqpair=0x1568690 00:58:29.013 [2024-12-09 11:09:30.085467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:29.013 [2024-12-09 11:09:30.085530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1568690) 00:58:29.013 [2024-12-09 11:09:30.085580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.013 [2024-12-09 11:09:30.085599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca880, cid 5, qid 0 00:58:29.013 [2024-12-09 11:09:30.085609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca700, cid 4, qid 0 00:58:29.013 [2024-12-09 11:09:30.085617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15caa00, cid 6, qid 0 00:58:29.013 [2024-12-09 11:09:30.085626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cab80, cid 7, qid 0 00:58:29.013 [2024-12-09 11:09:30.085763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.013 [2024-12-09 11:09:30.085775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.013 [2024-12-09 11:09:30.085782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085789] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=8192, cccid=5 00:58:29.013 [2024-12-09 11:09:30.085799] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca880) on tqpair(0x1568690): expected_datao=0, payload_size=8192 00:58:29.013 [2024-12-09 11:09:30.085808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085830] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.013 [2024-12-09 11:09:30.085863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.013 [2024-12-09 11:09:30.085870] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=512, cccid=4 00:58:29.013 [2024-12-09 11:09:30.085887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ca700) on tqpair(0x1568690): expected_datao=0, payload_size=512 00:58:29.013 [2024-12-09 11:09:30.085900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085910] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085918] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.013 [2024-12-09 11:09:30.085927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.013 [2024-12-09 11:09:30.085937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.014 [2024-12-09 11:09:30.085944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.085951] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=512, cccid=6 00:58:29.014 [2024-12-09 11:09:30.085960] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x15caa00) on tqpair(0x1568690): expected_datao=0, payload_size=512 00:58:29.014 [2024-12-09 11:09:30.085969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.085979] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.085987] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.085996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:58:29.014 [2024-12-09 11:09:30.086006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:58:29.014 [2024-12-09 11:09:30.086013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1568690): datao=0, datal=4096, cccid=7 00:58:29.014 [2024-12-09 11:09:30.086029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15cab80) on tqpair(0x1568690): expected_datao=0, payload_size=4096 00:58:29.014 [2024-12-09 11:09:30.086038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086056] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.014 [2024-12-09 11:09:30.086080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.014 [2024-12-09 11:09:30.086087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca880) on tqpair=0x1568690 00:58:29.014 [2024-12-09 11:09:30.086114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.014 [2024-12-09 11:09:30.086125] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.014 [2024-12-09 11:09:30.086132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca700) on tqpair=0x1568690 00:58:29.014 [2024-12-09 11:09:30.086157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.014 [2024-12-09 11:09:30.086167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.014 [2024-12-09 11:09:30.086174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15caa00) on tqpair=0x1568690 00:58:29.014 [2024-12-09 11:09:30.086194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.014 [2024-12-09 11:09:30.086204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.014 [2024-12-09 11:09:30.086211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.014 [2024-12-09 11:09:30.086218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cab80) on tqpair=0x1568690 00:58:29.014 ===================================================== 00:58:29.014 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:29.014 ===================================================== 00:58:29.014 Controller Capabilities/Features 00:58:29.014 ================================ 00:58:29.014 Vendor ID: 8086 00:58:29.014 Subsystem Vendor ID: 8086 00:58:29.014 Serial Number: SPDK00000000000001 00:58:29.014 Model Number: SPDK bdev Controller 00:58:29.014 Firmware Version: 25.01 00:58:29.014 Recommended Arb Burst: 6 00:58:29.014 IEEE OUI Identifier: e4 d2 5c 00:58:29.014 Multi-path I/O 00:58:29.014 May have multiple subsystem ports: Yes 00:58:29.014 May have multiple controllers: Yes 00:58:29.014 Associated with SR-IOV VF: No 
00:58:29.014 Max Data Transfer Size: 131072 00:58:29.014 Max Number of Namespaces: 32 00:58:29.014 Max Number of I/O Queues: 127 00:58:29.014 NVMe Specification Version (VS): 1.3 00:58:29.014 NVMe Specification Version (Identify): 1.3 00:58:29.014 Maximum Queue Entries: 128 00:58:29.014 Contiguous Queues Required: Yes 00:58:29.014 Arbitration Mechanisms Supported 00:58:29.014 Weighted Round Robin: Not Supported 00:58:29.014 Vendor Specific: Not Supported 00:58:29.014 Reset Timeout: 15000 ms 00:58:29.014 Doorbell Stride: 4 bytes 00:58:29.014 NVM Subsystem Reset: Not Supported 00:58:29.014 Command Sets Supported 00:58:29.014 NVM Command Set: Supported 00:58:29.014 Boot Partition: Not Supported 00:58:29.014 Memory Page Size Minimum: 4096 bytes 00:58:29.014 Memory Page Size Maximum: 4096 bytes 00:58:29.014 Persistent Memory Region: Not Supported 00:58:29.014 Optional Asynchronous Events Supported 00:58:29.014 Namespace Attribute Notices: Supported 00:58:29.014 Firmware Activation Notices: Not Supported 00:58:29.014 ANA Change Notices: Not Supported 00:58:29.014 PLE Aggregate Log Change Notices: Not Supported 00:58:29.014 LBA Status Info Alert Notices: Not Supported 00:58:29.014 EGE Aggregate Log Change Notices: Not Supported 00:58:29.014 Normal NVM Subsystem Shutdown event: Not Supported 00:58:29.014 Zone Descriptor Change Notices: Not Supported 00:58:29.014 Discovery Log Change Notices: Not Supported 00:58:29.014 Controller Attributes 00:58:29.014 128-bit Host Identifier: Supported 00:58:29.014 Non-Operational Permissive Mode: Not Supported 00:58:29.014 NVM Sets: Not Supported 00:58:29.014 Read Recovery Levels: Not Supported 00:58:29.014 Endurance Groups: Not Supported 00:58:29.014 Predictable Latency Mode: Not Supported 00:58:29.014 Traffic Based Keep ALive: Not Supported 00:58:29.014 Namespace Granularity: Not Supported 00:58:29.014 SQ Associations: Not Supported 00:58:29.014 UUID List: Not Supported 00:58:29.014 Multi-Domain Subsystem: Not Supported 00:58:29.014 
Fixed Capacity Management: Not Supported 00:58:29.014 Variable Capacity Management: Not Supported 00:58:29.014 Delete Endurance Group: Not Supported 00:58:29.014 Delete NVM Set: Not Supported 00:58:29.014 Extended LBA Formats Supported: Not Supported 00:58:29.014 Flexible Data Placement Supported: Not Supported 00:58:29.014 00:58:29.014 Controller Memory Buffer Support 00:58:29.014 ================================ 00:58:29.014 Supported: No 00:58:29.014 00:58:29.014 Persistent Memory Region Support 00:58:29.014 ================================ 00:58:29.014 Supported: No 00:58:29.014 00:58:29.014 Admin Command Set Attributes 00:58:29.014 ============================ 00:58:29.014 Security Send/Receive: Not Supported 00:58:29.014 Format NVM: Not Supported 00:58:29.014 Firmware Activate/Download: Not Supported 00:58:29.014 Namespace Management: Not Supported 00:58:29.014 Device Self-Test: Not Supported 00:58:29.014 Directives: Not Supported 00:58:29.014 NVMe-MI: Not Supported 00:58:29.014 Virtualization Management: Not Supported 00:58:29.014 Doorbell Buffer Config: Not Supported 00:58:29.014 Get LBA Status Capability: Not Supported 00:58:29.014 Command & Feature Lockdown Capability: Not Supported 00:58:29.014 Abort Command Limit: 4 00:58:29.014 Async Event Request Limit: 4 00:58:29.014 Number of Firmware Slots: N/A 00:58:29.014 Firmware Slot 1 Read-Only: N/A 00:58:29.014 Firmware Activation Without Reset: N/A 00:58:29.014 Multiple Update Detection Support: N/A 00:58:29.014 Firmware Update Granularity: No Information Provided 00:58:29.014 Per-Namespace SMART Log: No 00:58:29.014 Asymmetric Namespace Access Log Page: Not Supported 00:58:29.014 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:58:29.014 Command Effects Log Page: Supported 00:58:29.014 Get Log Page Extended Data: Supported 00:58:29.014 Telemetry Log Pages: Not Supported 00:58:29.014 Persistent Event Log Pages: Not Supported 00:58:29.014 Supported Log Pages Log Page: May Support 00:58:29.014 Commands Supported & 
Effects Log Page: Not Supported 00:58:29.014 Feature Identifiers & Effects Log Page:May Support 00:58:29.014 NVMe-MI Commands & Effects Log Page: May Support 00:58:29.014 Data Area 4 for Telemetry Log: Not Supported 00:58:29.014 Error Log Page Entries Supported: 128 00:58:29.014 Keep Alive: Supported 00:58:29.014 Keep Alive Granularity: 10000 ms 00:58:29.014 00:58:29.014 NVM Command Set Attributes 00:58:29.014 ========================== 00:58:29.014 Submission Queue Entry Size 00:58:29.014 Max: 64 00:58:29.014 Min: 64 00:58:29.014 Completion Queue Entry Size 00:58:29.014 Max: 16 00:58:29.014 Min: 16 00:58:29.014 Number of Namespaces: 32 00:58:29.014 Compare Command: Supported 00:58:29.014 Write Uncorrectable Command: Not Supported 00:58:29.014 Dataset Management Command: Supported 00:58:29.014 Write Zeroes Command: Supported 00:58:29.014 Set Features Save Field: Not Supported 00:58:29.014 Reservations: Supported 00:58:29.014 Timestamp: Not Supported 00:58:29.014 Copy: Supported 00:58:29.014 Volatile Write Cache: Present 00:58:29.014 Atomic Write Unit (Normal): 1 00:58:29.014 Atomic Write Unit (PFail): 1 00:58:29.014 Atomic Compare & Write Unit: 1 00:58:29.014 Fused Compare & Write: Supported 00:58:29.014 Scatter-Gather List 00:58:29.014 SGL Command Set: Supported 00:58:29.014 SGL Keyed: Supported 00:58:29.014 SGL Bit Bucket Descriptor: Not Supported 00:58:29.014 SGL Metadata Pointer: Not Supported 00:58:29.014 Oversized SGL: Not Supported 00:58:29.014 SGL Metadata Address: Not Supported 00:58:29.014 SGL Offset: Supported 00:58:29.014 Transport SGL Data Block: Not Supported 00:58:29.014 Replay Protected Memory Block: Not Supported 00:58:29.014 00:58:29.014 Firmware Slot Information 00:58:29.014 ========================= 00:58:29.015 Active slot: 1 00:58:29.015 Slot 1 Firmware Revision: 25.01 00:58:29.015 00:58:29.015 00:58:29.015 Commands Supported and Effects 00:58:29.015 ============================== 00:58:29.015 Admin Commands 00:58:29.015 -------------- 
00:58:29.015 Get Log Page (02h): Supported 00:58:29.015 Identify (06h): Supported 00:58:29.015 Abort (08h): Supported 00:58:29.015 Set Features (09h): Supported 00:58:29.015 Get Features (0Ah): Supported 00:58:29.015 Asynchronous Event Request (0Ch): Supported 00:58:29.015 Keep Alive (18h): Supported 00:58:29.015 I/O Commands 00:58:29.015 ------------ 00:58:29.015 Flush (00h): Supported LBA-Change 00:58:29.015 Write (01h): Supported LBA-Change 00:58:29.015 Read (02h): Supported 00:58:29.015 Compare (05h): Supported 00:58:29.015 Write Zeroes (08h): Supported LBA-Change 00:58:29.015 Dataset Management (09h): Supported LBA-Change 00:58:29.015 Copy (19h): Supported LBA-Change 00:58:29.015 00:58:29.015 Error Log 00:58:29.015 ========= 00:58:29.015 00:58:29.015 Arbitration 00:58:29.015 =========== 00:58:29.015 Arbitration Burst: 1 00:58:29.015 00:58:29.015 Power Management 00:58:29.015 ================ 00:58:29.015 Number of Power States: 1 00:58:29.015 Current Power State: Power State #0 00:58:29.015 Power State #0: 00:58:29.015 Max Power: 0.00 W 00:58:29.015 Non-Operational State: Operational 00:58:29.015 Entry Latency: Not Reported 00:58:29.015 Exit Latency: Not Reported 00:58:29.015 Relative Read Throughput: 0 00:58:29.015 Relative Read Latency: 0 00:58:29.015 Relative Write Throughput: 0 00:58:29.015 Relative Write Latency: 0 00:58:29.015 Idle Power: Not Reported 00:58:29.015 Active Power: Not Reported 00:58:29.015 Non-Operational Permissive Mode: Not Supported 00:58:29.015 00:58:29.015 Health Information 00:58:29.015 ================== 00:58:29.015 Critical Warnings: 00:58:29.015 Available Spare Space: OK 00:58:29.015 Temperature: OK 00:58:29.015 Device Reliability: OK 00:58:29.015 Read Only: No 00:58:29.015 Volatile Memory Backup: OK 00:58:29.015 Current Temperature: 0 Kelvin (-273 Celsius) 00:58:29.015 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:58:29.015 Available Spare: 0% 00:58:29.015 Available Spare Threshold: 0% 00:58:29.015 Life Percentage 
Used:[2024-12-09 11:09:30.086347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.086368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.086390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15cab80, cid 7, qid 0 00:58:29.015 [2024-12-09 11:09:30.086472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 11:09:30.086483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.015 [2024-12-09 11:09:30.086490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15cab80) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086554] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:58:29.015 [2024-12-09 11:09:30.086572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca100) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:29.015 [2024-12-09 11:09:30.086593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca280) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:29.015 [2024-12-09 11:09:30.086612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca400) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:29.015 [2024-12-09 11:09:30.086630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:29.015 [2024-12-09 11:09:30.086659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.086687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.086708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.015 [2024-12-09 11:09:30.086790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 11:09:30.086801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.015 [2024-12-09 11:09:30.086808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.086852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.086875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.015 [2024-12-09 11:09:30.086958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 11:09:30.086969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.015 [2024-12-09 11:09:30.086977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.086984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.086993] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:58:29.015 [2024-12-09 11:09:30.087003] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:58:29.015 [2024-12-09 11:09:30.087022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.087048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.087067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.015 [2024-12-09 11:09:30.087140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 11:09:30.087151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.015 [2024-12-09 11:09:30.087158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087166] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.087181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.087207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.087226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.015 [2024-12-09 11:09:30.087291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 11:09:30.087303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.015 [2024-12-09 11:09:30.087310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.015 [2024-12-09 11:09:30.087332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.015 [2024-12-09 11:09:30.087348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.015 [2024-12-09 11:09:30.087358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.015 [2024-12-09 11:09:30.087376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.015 [2024-12-09 11:09:30.087450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.015 [2024-12-09 
11:09:30.087460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.016 [2024-12-09 11:09:30.087468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.087475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.016 [2024-12-09 11:09:30.087491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.087499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.087506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.016 [2024-12-09 11:09:30.087517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.016 [2024-12-09 11:09:30.087535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.016 [2024-12-09 11:09:30.087600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.016 [2024-12-09 11:09:30.087611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.016 [2024-12-09 11:09:30.087618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.087625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.016 [2024-12-09 11:09:30.087640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.091664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.091673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1568690) 00:58:29.016 [2024-12-09 11:09:30.091684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.016 [2024-12-09 
11:09:30.091705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ca580, cid 3, qid 0 00:58:29.016 [2024-12-09 11:09:30.091786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:58:29.016 [2024-12-09 11:09:30.091797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:58:29.016 [2024-12-09 11:09:30.091804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:58:29.016 [2024-12-09 11:09:30.091812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ca580) on tqpair=0x1568690 00:58:29.016 [2024-12-09 11:09:30.091826] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:58:29.276 0% 00:58:29.276 Data Units Read: 0 00:58:29.276 Data Units Written: 0 00:58:29.276 Host Read Commands: 0 00:58:29.276 Host Write Commands: 0 00:58:29.276 Controller Busy Time: 0 minutes 00:58:29.276 Power Cycles: 0 00:58:29.276 Power On Hours: 0 hours 00:58:29.276 Unsafe Shutdowns: 0 00:58:29.276 Unrecoverable Media Errors: 0 00:58:29.276 Lifetime Error Log Entries: 0 00:58:29.277 Warning Temperature Time: 0 minutes 00:58:29.277 Critical Temperature Time: 0 minutes 00:58:29.277 00:58:29.277 Number of Queues 00:58:29.277 ================ 00:58:29.277 Number of I/O Submission Queues: 127 00:58:29.277 Number of I/O Completion Queues: 127 00:58:29.277 00:58:29.277 Active Namespaces 00:58:29.277 ================= 00:58:29.277 Namespace ID:1 00:58:29.277 Error Recovery Timeout: Unlimited 00:58:29.277 Command Set Identifier: NVM (00h) 00:58:29.277 Deallocate: Supported 00:58:29.277 Deallocated/Unwritten Error: Not Supported 00:58:29.277 Deallocated Read Value: Unknown 00:58:29.277 Deallocate in Write Zeroes: Not Supported 00:58:29.277 Deallocated Guard Field: 0xFFFF 00:58:29.277 Flush: Supported 00:58:29.277 Reservation: Supported 00:58:29.277 Namespace Sharing Capabilities: Multiple Controllers 00:58:29.277 Size (in LBAs): 131072 
(0GiB) 00:58:29.277 Capacity (in LBAs): 131072 (0GiB) 00:58:29.277 Utilization (in LBAs): 131072 (0GiB) 00:58:29.277 NGUID: ABCDEF0123456789ABCDEF0123456789 00:58:29.277 EUI64: ABCDEF0123456789 00:58:29.277 UUID: cc659da9-2a6c-4537-ba58-736b70b0ab4a 00:58:29.277 Thin Provisioning: Not Supported 00:58:29.277 Per-NS Atomic Units: Yes 00:58:29.277 Atomic Boundary Size (Normal): 0 00:58:29.277 Atomic Boundary Size (PFail): 0 00:58:29.277 Atomic Boundary Offset: 0 00:58:29.277 Maximum Single Source Range Length: 65535 00:58:29.277 Maximum Copy Length: 65535 00:58:29.277 Maximum Source Range Count: 1 00:58:29.277 NGUID/EUI64 Never Reused: No 00:58:29.277 Namespace Write Protected: No 00:58:29.277 Number of LBA Formats: 1 00:58:29.277 Current LBA Format: LBA Format #00 00:58:29.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:58:29.277 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:58:29.277 11:09:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:29.277 rmmod nvme_tcp 00:58:29.277 rmmod nvme_fabrics 00:58:29.277 rmmod nvme_keyring 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2485959 ']' 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2485959 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2485959 ']' 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2485959 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485959 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485959' 00:58:29.277 killing process with pid 2485959 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2485959 00:58:29.277 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2485959 00:58:29.537 11:09:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:29.537 11:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:32.080 00:58:32.080 real 0m10.618s 00:58:32.080 user 0m7.884s 00:58:32.080 sys 0m5.360s 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:58:32.080 ************************************ 00:58:32.080 END TEST nvmf_identify 00:58:32.080 ************************************ 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:58:32.080 11:09:32 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:58:32.080 ************************************ 00:58:32.080 START TEST nvmf_perf 00:58:32.080 ************************************ 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:58:32.080 * Looking for test storage... 00:58:32.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:58:32.080 11:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:58:32.080 
11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:58:32.080 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:32.081 11:09:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:32.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:32.081 --rc genhtml_branch_coverage=1 00:58:32.081 --rc genhtml_function_coverage=1 00:58:32.081 --rc genhtml_legend=1 00:58:32.081 --rc geninfo_all_blocks=1 00:58:32.081 --rc geninfo_unexecuted_blocks=1 00:58:32.081 00:58:32.081 ' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:32.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:32.081 --rc genhtml_branch_coverage=1 00:58:32.081 --rc genhtml_function_coverage=1 00:58:32.081 --rc genhtml_legend=1 00:58:32.081 --rc geninfo_all_blocks=1 00:58:32.081 --rc geninfo_unexecuted_blocks=1 00:58:32.081 00:58:32.081 ' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:32.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:32.081 --rc genhtml_branch_coverage=1 00:58:32.081 --rc genhtml_function_coverage=1 00:58:32.081 --rc genhtml_legend=1 00:58:32.081 --rc geninfo_all_blocks=1 00:58:32.081 --rc geninfo_unexecuted_blocks=1 00:58:32.081 00:58:32.081 ' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:32.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:32.081 --rc genhtml_branch_coverage=1 00:58:32.081 --rc genhtml_function_coverage=1 00:58:32.081 --rc genhtml_legend=1 00:58:32.081 --rc geninfo_all_blocks=1 00:58:32.081 --rc geninfo_unexecuted_blocks=1 00:58:32.081 00:58:32.081 ' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:32.081 11:09:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:32.081 11:09:33 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:32.081 11:09:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:32.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:58:32.081 11:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:58:40.214 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:40.218 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:58:40.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:40.219 
11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:58:40.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:58:40.219 Found net devices under 0000:af:00.0: cvl_0_0 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:58:40.219 Found net devices under 0000:af:00.1: cvl_0_1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:40.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:40.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:58:40.219 00:58:40.219 --- 10.0.0.2 ping statistics --- 00:58:40.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:40.219 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:40.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:40.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:58:40.219 00:58:40.219 --- 10.0.0.1 ping statistics --- 00:58:40.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:40.219 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2489386 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2489386 00:58:40.219 
11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2489386 ']' 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:40.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:40.219 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:58:40.219 [2024-12-09 11:09:40.439079] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:58:40.219 [2024-12-09 11:09:40.439157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:40.219 [2024-12-09 11:09:40.570242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:58:40.219 [2024-12-09 11:09:40.623553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:40.219 [2024-12-09 11:09:40.623605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:40.219 [2024-12-09 11:09:40.623626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:40.219 [2024-12-09 11:09:40.623640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:40.219 [2024-12-09 11:09:40.623658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:58:40.219 [2024-12-09 11:09:40.625399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:40.219 [2024-12-09 11:09:40.625486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:40.219 [2024-12-09 11:09:40.625580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:58:40.219 [2024-12-09 11:09:40.625585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:58:40.220 11:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:58:43.514 11:09:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:58:43.514 11:09:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:58:43.514 11:09:44 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:58:43.514 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:58:43.772 [2024-12-09 11:09:44.791466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:43.772 11:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:44.031 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:58:44.031 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:44.290 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:58:44.291 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:58:44.550 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:44.810 [2024-12-09 11:09:45.951939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:44.810 11:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:58:45.379 11:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:58:45.379 11:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:58:45.379 11:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:58:45.379 11:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:58:46.764 Initializing NVMe Controllers 00:58:46.764 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:58:46.764 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:58:46.764 Initialization complete. Launching workers. 00:58:46.764 ======================================================== 00:58:46.764 Latency(us) 00:58:46.764 Device Information : IOPS MiB/s Average min max 00:58:46.764 PCIE (0000:5e:00.0) NSID 1 from core 0: 71108.75 277.77 449.59 40.01 7397.74 00:58:46.764 ======================================================== 00:58:46.764 Total : 71108.75 277.77 449.59 40.01 7397.74 00:58:46.764 00:58:46.764 11:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:58:48.145 Initializing NVMe Controllers 00:58:48.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:48.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:58:48.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:58:48.145 Initialization complete. Launching workers. 
00:58:48.145 ======================================================== 00:58:48.145 Latency(us) 00:58:48.145 Device Information : IOPS MiB/s Average min max 00:58:48.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.00 0.24 16334.30 118.09 44704.90 00:58:48.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21815.68 5021.00 47912.17 00:58:48.145 ======================================================== 00:58:48.145 Total : 108.00 0.42 18668.96 118.09 47912.17 00:58:48.145 00:58:48.145 11:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:58:49.525 Initializing NVMe Controllers 00:58:49.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:49.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:58:49.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:58:49.525 Initialization complete. Launching workers. 
00:58:49.525 ======================================================== 00:58:49.525 Latency(us) 00:58:49.525 Device Information : IOPS MiB/s Average min max 00:58:49.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9895.18 38.65 3232.30 515.03 9072.87 00:58:49.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3759.65 14.69 8531.56 5598.59 17719.37 00:58:49.525 ======================================================== 00:58:49.525 Total : 13654.83 53.34 4691.37 515.03 17719.37 00:58:49.525 00:58:49.785 11:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:58:49.785 11:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:58:49.785 11:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:58:52.323 Initializing NVMe Controllers 00:58:52.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:52.323 Controller IO queue size 128, less than required. 00:58:52.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:52.323 Controller IO queue size 128, less than required. 00:58:52.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:52.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:58:52.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:58:52.323 Initialization complete. Launching workers. 
00:58:52.323 ======================================================== 00:58:52.323 Latency(us) 00:58:52.323 Device Information : IOPS MiB/s Average min max 00:58:52.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1353.65 338.41 97113.52 71123.17 142486.50 00:58:52.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 564.02 141.00 232288.72 87039.58 350453.49 00:58:52.323 ======================================================== 00:58:52.323 Total : 1917.66 479.42 136870.93 71123.17 350453.49 00:58:52.323 00:58:52.323 11:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:58:52.583 No valid NVMe controllers or AIO or URING devices found 00:58:52.583 Initializing NVMe Controllers 00:58:52.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:52.583 Controller IO queue size 128, less than required. 00:58:52.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:52.583 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:58:52.583 Controller IO queue size 128, less than required. 00:58:52.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:52.583 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:58:52.583 WARNING: Some requested NVMe devices were skipped 00:58:52.583 11:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:58:55.879 Initializing NVMe Controllers 00:58:55.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:55.879 Controller IO queue size 128, less than required. 00:58:55.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:55.879 Controller IO queue size 128, less than required. 00:58:55.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:55.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:58:55.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:58:55.879 Initialization complete. Launching workers. 
00:58:55.879 00:58:55.879 ==================== 00:58:55.879 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:58:55.879 TCP transport: 00:58:55.879 polls: 6409 00:58:55.879 idle_polls: 3360 00:58:55.879 sock_completions: 3049 00:58:55.879 nvme_completions: 5125 00:58:55.879 submitted_requests: 7684 00:58:55.879 queued_requests: 1 00:58:55.879 00:58:55.879 ==================== 00:58:55.879 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:58:55.879 TCP transport: 00:58:55.879 polls: 6349 00:58:55.879 idle_polls: 3366 00:58:55.879 sock_completions: 2983 00:58:55.879 nvme_completions: 5193 00:58:55.879 submitted_requests: 7838 00:58:55.879 queued_requests: 1 00:58:55.879 ======================================================== 00:58:55.879 Latency(us) 00:58:55.879 Device Information : IOPS MiB/s Average min max 00:58:55.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1280.92 320.23 103276.34 68316.50 182664.00 00:58:55.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1297.91 324.48 99396.00 55081.51 138910.39 00:58:55.879 ======================================================== 00:58:55.879 Total : 2578.83 644.71 101323.38 55081.51 182664.00 00:58:55.879 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:55.879 rmmod nvme_tcp 00:58:55.879 rmmod nvme_fabrics 00:58:55.879 rmmod nvme_keyring 00:58:55.879 11:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2489386 ']' 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2489386 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2489386 ']' 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2489386 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:58:55.879 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:55.880 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489386 00:58:56.139 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:56.139 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:56.139 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489386' 00:58:56.139 killing process with pid 2489386 00:58:56.139 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2489386 00:58:56.139 11:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2489386 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:00.338 11:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:59:02.246 00:59:02.246 real 0m30.206s 00:59:02.246 user 1m22.017s 00:59:02.246 sys 0m9.577s 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:59:02.246 ************************************ 00:59:02.246 END TEST nvmf_perf 00:59:02.246 ************************************ 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:59:02.246 ************************************ 00:59:02.246 START TEST nvmf_fio_host 00:59:02.246 ************************************ 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:59:02.246 * Looking for test storage... 00:59:02.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:59:02.246 11:10:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:02.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:02.246 11:10:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:02.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:02.247 --rc genhtml_branch_coverage=1 00:59:02.247 --rc genhtml_function_coverage=1 00:59:02.247 --rc genhtml_legend=1 00:59:02.247 --rc geninfo_all_blocks=1 00:59:02.247 --rc geninfo_unexecuted_blocks=1 00:59:02.247 00:59:02.247 ' 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:02.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:02.247 --rc genhtml_branch_coverage=1 00:59:02.247 --rc genhtml_function_coverage=1 00:59:02.247 --rc genhtml_legend=1 00:59:02.247 --rc geninfo_all_blocks=1 00:59:02.247 --rc geninfo_unexecuted_blocks=1 00:59:02.247 00:59:02.247 ' 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:02.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:02.247 --rc genhtml_branch_coverage=1 00:59:02.247 --rc genhtml_function_coverage=1 00:59:02.247 --rc genhtml_legend=1 00:59:02.247 --rc geninfo_all_blocks=1 00:59:02.247 --rc geninfo_unexecuted_blocks=1 00:59:02.247 00:59:02.247 ' 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:02.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:02.247 --rc genhtml_branch_coverage=1 00:59:02.247 --rc genhtml_function_coverage=1 00:59:02.247 --rc genhtml_legend=1 00:59:02.247 --rc geninfo_all_blocks=1 00:59:02.247 --rc geninfo_unexecuted_blocks=1 00:59:02.247 00:59:02.247 ' 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:02.247 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:02.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:59:02.507 11:10:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:59:02.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:59:09.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:59:09.083 Found 0000:af:00.1 (0x8086 - 0x159b) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:09.083 11:10:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:59:09.083 Found net devices under 0000:af:00.0: cvl_0_0 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:59:09.083 Found net devices under 0000:af:00.1: cvl_0_1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:59:09.083 11:10:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:59:09.083 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:59:09.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:09.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:59:09.084 00:59:09.084 --- 10.0.0.2 ping statistics --- 00:59:09.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:09.084 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:59:09.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:59:09.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:59:09.084 00:59:09.084 --- 10.0.0.1 ping statistics --- 00:59:09.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:09.084 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2495162 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2495162 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2495162 ']' 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:09.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:09.084 11:10:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:59:09.084 [2024-12-09 11:10:09.936188] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:59:09.084 [2024-12-09 11:10:09.936242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:09.084 [2024-12-09 11:10:10.050556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:09.084 [2024-12-09 11:10:10.103495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:09.084 [2024-12-09 11:10:10.103543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:59:09.084 [2024-12-09 11:10:10.103559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:09.084 [2024-12-09 11:10:10.103573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:09.084 [2024-12-09 11:10:10.103585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:09.084 [2024-12-09 11:10:10.105397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:09.084 [2024-12-09 11:10:10.105422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:09.084 [2024-12-09 11:10:10.105494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:09.084 [2024-12-09 11:10:10.105498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:09.084 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:09.084 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:59:09.084 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:59:09.343 [2024-12-09 11:10:10.468547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:09.343 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:59:09.343 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:09.343 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:59:09.601 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:59:09.860 Malloc1 00:59:09.860 11:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:10.120 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:59:10.380 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:10.380 [2024-12-09 11:10:11.505256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:10.380 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:10.639 11:10:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:10.639 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:10.640 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:59:10.898 11:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:59:11.156 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:59:11.156 fio-3.35 00:59:11.156 Starting 1 thread 00:59:13.678 00:59:13.678 test: (groupid=0, jobs=1): err= 0: pid=2495626: Mon Dec 9 11:10:14 2024 00:59:13.678 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:59:13.678 slat (usec): min=2, max=261, avg= 2.50, stdev= 2.42 00:59:13.678 clat (usec): min=3319, max=9855, avg=5918.39, stdev=422.12 00:59:13.678 lat (usec): min=3357, max=9857, avg=5920.88, stdev=422.12 00:59:13.678 clat percentiles (usec): 00:59:13.678 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:59:13.678 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:59:13.678 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:59:13.678 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 8094], 99.95th=[ 8979], 00:59:13.678 | 99.99th=[ 9765] 00:59:13.678 bw ( KiB/s): min=46512, max=48288, per=99.97%, avg=47536.00, stdev=758.24, samples=4 00:59:13.678 iops : min=11628, max=12072, avg=11884.00, stdev=189.56, samples=4 00:59:13.678 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:59:13.678 slat (usec): min=2, max=233, avg= 2.59, stdev= 1.74 00:59:13.678 clat (usec): min=2612, max=9469, avg=4828.03, stdev=372.61 00:59:13.678 lat (usec): min=2627, max=9472, avg=4830.62, stdev=372.70 00:59:13.678 clat percentiles (usec): 00:59:13.678 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:59:13.678 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:59:13.678 | 70.00th=[ 
5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:59:13.678 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 8717], 00:59:13.678 | 99.99th=[ 9110] 00:59:13.678 bw ( KiB/s): min=47072, max=47872, per=100.00%, avg=47328.00, stdev=372.27, samples=4 00:59:13.678 iops : min=11768, max=11968, avg=11832.00, stdev=93.07, samples=4 00:59:13.678 lat (msec) : 4=0.57%, 10=99.43% 00:59:13.678 cpu : usr=74.50%, sys=23.40%, ctx=161, majf=0, minf=2 00:59:13.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:59:13.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:13.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:59:13.678 issued rwts: total=23835,23724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:13.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:59:13.678 00:59:13.678 Run status group 0 (all jobs): 00:59:13.678 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:59:13.678 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:13.678 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:13.679 
11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:59:13.679 11:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:59:13.936 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:59:13.936 fio-3.35 00:59:13.936 Starting 1 thread 00:59:16.479 [2024-12-09 11:10:17.376624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1508490 is same with the state(6) to be set 00:59:16.479 00:59:16.479 test: (groupid=0, jobs=1): err= 0: pid=2496070: Mon Dec 9 11:10:17 2024 00:59:16.479 read: IOPS=9605, BW=150MiB/s (157MB/s)(301MiB/2007msec) 00:59:16.479 slat (nsec): min=2502, max=90541, avg=2791.41, stdev=1331.29 00:59:16.479 clat (usec): min=1266, max=49020, avg=7590.16, stdev=3042.29 00:59:16.479 lat (usec): min=1268, max=49023, avg=7592.95, stdev=3042.33 00:59:16.479 clat percentiles (usec): 00:59:16.479 | 1.00th=[ 3556], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 5866], 00:59:16.479 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7701], 00:59:16.479 | 70.00th=[ 8225], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10814], 00:59:16.479 | 99.00th=[13042], 99.50th=[15270], 99.90th=[46924], 99.95th=[47449], 00:59:16.479 | 99.99th=[49021] 00:59:16.479 bw ( KiB/s): min=68128, max=86464, per=49.90%, avg=76696.00, stdev=7589.93, samples=4 00:59:16.479 iops : min= 4258, max= 5404, avg=4793.50, stdev=474.37, samples=4 00:59:16.479 write: IOPS=5650, BW=88.3MiB/s (92.6MB/s)(157MiB/1783msec); 0 zone resets 00:59:16.479 slat (usec): min=28, max=385, avg=31.21, stdev= 7.08 00:59:16.479 clat (usec): min=4686, max=53762, avg=10182.38, stdev=3904.26 00:59:16.479 lat (usec): 
min=4716, max=53792, avg=10213.59, stdev=3904.60 00:59:16.479 clat percentiles (usec): 00:59:16.479 | 1.00th=[ 6063], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7635], 00:59:16.479 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10290], 00:59:16.479 | 70.00th=[11469], 80.00th=[12387], 90.00th=[13829], 95.00th=[14746], 00:59:16.479 | 99.00th=[16450], 99.50th=[48497], 99.90th=[50070], 99.95th=[53216], 00:59:16.479 | 99.99th=[53740] 00:59:16.479 bw ( KiB/s): min=71424, max=89568, per=88.65%, avg=80144.00, stdev=7770.42, samples=4 00:59:16.479 iops : min= 4464, max= 5598, avg=5009.00, stdev=485.65, samples=4 00:59:16.479 lat (msec) : 2=0.07%, 4=1.49%, 10=78.21%, 20=19.74%, 50=0.44% 00:59:16.479 lat (msec) : 100=0.04% 00:59:16.479 cpu : usr=81.75%, sys=17.50%, ctx=57, majf=0, minf=2 00:59:16.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:59:16.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:16.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:59:16.479 issued rwts: total=19279,10074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:16.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:59:16.479 00:59:16.480 Run status group 0 (all jobs): 00:59:16.480 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=301MiB (316MB), run=2007-2007msec 00:59:16.480 WRITE: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=157MiB (165MB), run=1783-1783msec 00:59:16.480 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:59:16.741 rmmod nvme_tcp 00:59:16.741 rmmod nvme_fabrics 00:59:16.741 rmmod nvme_keyring 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2495162 ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2495162 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2495162 ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2495162 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2495162 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:16.741 11:10:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2495162' 00:59:16.741 killing process with pid 2495162 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2495162 00:59:16.741 11:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2495162 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:59:16.999 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:17.000 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:17.000 11:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:59:19.533 00:59:19.533 real 0m17.014s 00:59:19.533 user 0m43.979s 00:59:19.533 sys 0m7.053s 00:59:19.533 11:10:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:59:19.533 ************************************ 00:59:19.533 END TEST nvmf_fio_host 00:59:19.533 ************************************ 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:59:19.533 ************************************ 00:59:19.533 START TEST nvmf_failover 00:59:19.533 ************************************ 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:59:19.533 * Looking for test storage... 
00:59:19.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:19.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:19.533 --rc genhtml_branch_coverage=1 00:59:19.533 --rc genhtml_function_coverage=1 00:59:19.533 --rc genhtml_legend=1 00:59:19.533 --rc geninfo_all_blocks=1 00:59:19.533 --rc geninfo_unexecuted_blocks=1 00:59:19.533 00:59:19.533 ' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:59:19.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:19.533 --rc genhtml_branch_coverage=1 00:59:19.533 --rc genhtml_function_coverage=1 00:59:19.533 --rc genhtml_legend=1 00:59:19.533 --rc geninfo_all_blocks=1 00:59:19.533 --rc geninfo_unexecuted_blocks=1 00:59:19.533 00:59:19.533 ' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:19.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:19.533 --rc genhtml_branch_coverage=1 00:59:19.533 --rc genhtml_function_coverage=1 00:59:19.533 --rc genhtml_legend=1 00:59:19.533 --rc geninfo_all_blocks=1 00:59:19.533 --rc geninfo_unexecuted_blocks=1 00:59:19.533 00:59:19.533 ' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:19.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:19.533 --rc genhtml_branch_coverage=1 00:59:19.533 --rc genhtml_function_coverage=1 00:59:19.533 --rc genhtml_legend=1 00:59:19.533 --rc geninfo_all_blocks=1 00:59:19.533 --rc geninfo_unexecuted_blocks=1 00:59:19.533 00:59:19.533 ' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.533 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:19.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:59:19.534 11:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:59:26.109 11:10:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:59:26.109 Found 0000:af:00.0 (0x8086 - 0x159b) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:26.109 11:10:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:59:26.109 Found 0000:af:00.1 (0x8086 - 0x159b) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:26.109 11:10:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:59:26.109 Found net devices under 0000:af:00.0: cvl_0_0 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:59:26.109 Found net devices under 0000:af:00.1: cvl_0_1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:59:26.109 11:10:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:59:26.109 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:59:26.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:26.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:59:26.110 00:59:26.110 --- 10.0.0.2 ping statistics --- 00:59:26.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:26.110 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:59:26.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:59:26.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:59:26.110 00:59:26.110 --- 10.0.0.1 ping statistics --- 00:59:26.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:26.110 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2499481 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2499481 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2499481 ']' 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:26.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:26.110 11:10:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:26.110 [2024-12-09 11:10:26.857532] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:59:26.110 [2024-12-09 11:10:26.857607] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:26.110 [2024-12-09 11:10:26.960072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:59:26.110 [2024-12-09 11:10:27.006316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:26.110 [2024-12-09 11:10:27.006357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:59:26.110 [2024-12-09 11:10:27.006368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:26.110 [2024-12-09 11:10:27.006378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:59:26.110 [2024-12-09 11:10:27.006386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:26.110 [2024-12-09 11:10:27.007751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:26.110 [2024-12-09 11:10:27.007767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:26.110 [2024-12-09 11:10:27.007770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:26.110 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:59:26.370 [2024-12-09 11:10:27.420159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:26.370 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:59:26.629 Malloc0 00:59:26.629 11:10:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:26.888 11:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:59:27.456 11:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:27.456 [2024-12-09 11:10:28.591229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:27.456 11:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:59:27.715 [2024-12-09 11:10:28.876074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:59:27.975 11:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:59:28.236 [2024-12-09 11:10:29.165051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2499857 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2499857 /var/tmp/bdevperf.sock 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2499857 ']' 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 15 -f 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:59:28.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:28.236 11:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:29.176 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:29.176 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:59:29.176 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:29.435 NVMe0n1 00:59:29.435 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:29.694 00:59:29.954 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2500039 00:59:29.954 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:59:29.954 11:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:59:30.893 11:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:31.154 [2024-12-09 11:10:32.078773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783360 is same with the state(6) to be set 00:59:31.154 [... identical tqpair=0x783360 messages repeated through 11:10:32.079113, duplicates elided ...] 00:59:31.154 11:10:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:59:34.448 11:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:34.448 00:59:34.707 11:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:59:34.967 [2024-12-09 11:10:35.923896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783fc0 is same with the state(6) to be set 00:59:34.968 [... identical tqpair=0x783fc0 messages repeated through 11:10:35.924792, duplicates elided ...] 00:59:34.968 11:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:59:38.259 11:10:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:38.259 [2024-12-09 11:10:39.222323]
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:38.259 11:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:59:39.194 11:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:59:39.453 11:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2500039 00:59:46.026 { 00:59:46.026 "results": [ 00:59:46.026 { 00:59:46.026 "job": "NVMe0n1", 00:59:46.026 "core_mask": "0x1", 00:59:46.026 "workload": "verify", 00:59:46.026 "status": "finished", 00:59:46.026 "verify_range": { 00:59:46.026 "start": 0, 00:59:46.026 "length": 16384 00:59:46.026 }, 00:59:46.026 "queue_depth": 128, 00:59:46.026 "io_size": 4096, 00:59:46.026 "runtime": 15.012833, 00:59:46.026 "iops": 10219.789962360868, 00:59:46.026 "mibps": 39.92105454047214, 00:59:46.026 "io_failed": 6117, 00:59:46.026 "io_timeout": 0, 00:59:46.026 "avg_latency_us": 12011.903524037787, 00:59:46.026 "min_latency_us": 612.6191304347826, 00:59:46.026 "max_latency_us": 37839.91652173913 00:59:46.026 } 00:59:46.026 ], 00:59:46.026 "core_count": 1 00:59:46.026 } 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2499857 ']' 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499857' 00:59:46.026 killing process with pid 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2499857 00:59:46.026 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:59:46.026 [2024-12-09 11:10:29.250920] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:59:46.026 [2024-12-09 11:10:29.251009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499857 ] 00:59:46.026 [2024-12-09 11:10:29.381241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:46.026 [2024-12-09 11:10:29.436406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:46.026 Running I/O for 15 seconds... 
00:59:46.026 10012.00 IOPS, 39.11 MiB/s [2024-12-09T10:10:47.202Z]
[2024-12-09 11:10:32.080260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-09 11:10:32.080314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE command / "ABORTED - SQ DELETION (00/08)" completion pair above repeats for lba:87800 through lba:88520 (len:8 each, qid:1, various cids), interleaved with READ commands (SGL TRANSPORT DATA BLOCK) for lba:87560 through lba:87608 ...]
[2024-12-09 11:10:32.083509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 11:10:32.083525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 11:10:32.083540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:10:32.083584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 11:10:32.083602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" pair repeats for admin qpair cid:1 through cid:3 ...]
[2024-12-09 11:10:32.083718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0790 is same with the state(6) to be set
[2024-12-09 11:10:32.083979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the abort_queued_reqs / manual_complete_request / "ABORTED - SQ DELETION" sequence repeats for queued WRITE commands lba:88536 through lba:88576 and queued READ commands lba:87616 through lba:87632 (all cid:0, PRP1 0x0 PRP2 0x0) ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87640 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87648 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87656 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 
[2024-12-09 11:10:32.084636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87672 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87680 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:87688 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87696 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87704 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.084948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.084960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.084971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87712 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.084985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.085000] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.085012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.085025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87720 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.085039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.085056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.085068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.085079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87728 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.085094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.085108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.085119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.085132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87736 len:8 PRP1 0x0 PRP2 0x0 00:59:46.029 [2024-12-09 11:10:32.085146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.029 [2024-12-09 11:10:32.085160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.029 [2024-12-09 11:10:32.085172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.029 [2024-12-09 11:10:32.085184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87744 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87752 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87760 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87768 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87776 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87784 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87792 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87800 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87808 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.085637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.085653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.085666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87816 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.085681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.097558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87824 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.097631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87832 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.097710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87840 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.097784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87848 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:59:46.030 [2024-12-09 11:10:32.097858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87856 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.097929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.097945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87864 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.097964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.097985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.098001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.098018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87872 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.098037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.098057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.098072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.098091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:87880 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.098110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.098131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.098147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.098164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87888 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.098184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.098204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.098220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.098236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87896 len:8 PRP1 0x0 PRP2 0x0 00:59:46.030 [2024-12-09 11:10:32.098256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.030 [2024-12-09 11:10:32.098276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.030 [2024-12-09 11:10:32.098291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.030 [2024-12-09 11:10:32.098308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87904 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 
11:10:32.098350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87912 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87920 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87928 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 
[2024-12-09 11:10:32.098597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87936 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87944 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87952 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87960 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87968 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.098933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.098948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.098965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87976 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.098984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.099004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.099020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.099036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87984 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.099056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.099075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.031 [2024-12-09 11:10:32.099091] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.031 [2024-12-09 11:10:32.099107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87992 len:8 PRP1 0x0 PRP2 0x0 00:59:46.031 [2024-12-09 11:10:32.099127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.031 [2024-12-09 11:10:32.099146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... near-identical abort cycles elided: the same three messages (nvme_qpair_abort_queued_reqs *ERROR*, nvme_qpair_manual_complete_request *NOTICE*, nvme_io_qpair_print_command *NOTICE*, spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0) repeat for WRITE commands lba:88000 through lba:88496 in steps of 8, with an interleaved run of READ commands lba:87560 through lba:87608 in steps of 8, over timestamps 11:10:32.099162 to 11:10:32.104238 ...]
00:59:46.034 [2024-12-09 11:10:32.104258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88504 len:8 PRP1 0x0 PRP2 0x0 00:59:46.034 [2024-12-09 11:10:32.104278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:32.104298] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.034 [2024-12-09 11:10:32.104314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.034 [2024-12-09 11:10:32.104331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88512 len:8 PRP1 0x0 PRP2 0x0 00:59:46.034 [2024-12-09 11:10:32.104350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:32.104370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.034 [2024-12-09 11:10:32.104386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.034 [2024-12-09 11:10:32.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88520 len:8 PRP1 0x0 PRP2 0x0 00:59:46.034 [2024-12-09 11:10:32.104422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:32.110231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.034 [2024-12-09 11:10:32.110253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.034 [2024-12-09 11:10:32.110270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0 00:59:46.034 [2024-12-09 11:10:32.110290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:32.110362] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:59:46.034 [2024-12-09 11:10:32.110385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
1] in failed state. 00:59:46.034 [2024-12-09 11:10:32.110453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0790 (9): Bad file descriptor 00:59:46.034 [2024-12-09 11:10:32.115987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:59:46.034 [2024-12-09 11:10:32.231738] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:59:46.034 9347.50 IOPS, 36.51 MiB/s [2024-12-09T10:10:47.210Z] 9836.00 IOPS, 38.42 MiB/s [2024-12-09T10:10:47.210Z] 10067.75 IOPS, 39.33 MiB/s [2024-12-09T10:10:47.210Z] [2024-12-09 11:10:35.926815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.926861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.926886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.926903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.926921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.926937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.926955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.926971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.926988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:59:46.034 [2024-12-09 11:10:35.927169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.034 [2024-12-09 11:10:35.927257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.034 [2024-12-09 11:10:35.927273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 
[2024-12-09 11:10:35.927745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.927977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.927994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 
[2024-12-09 11:10:35.928308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.035 [2024-12-09 11:10:35.928475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.035 [2024-12-09 11:10:35.928493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.036 [2024-12-09 11:10:35.928808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.928841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:59:46.036 [2024-12-09 11:10:35.928875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.928908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.928940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.928973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.928990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 
[2024-12-09 11:10:35.929424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929601] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.036 [2024-12-09 11:10:35.929812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.036 [2024-12-09 11:10:35.929833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.929848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.929865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.929880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.929897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.929911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.929928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.929943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.929960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.929975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.929992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.037 [2024-12-09 11:10:35.930007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.037 [2024-12-09 11:10:35.930039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.037 [2024-12-09 11:10:35.930070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.037 [2024-12-09 11:10:35.930105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 
11:10:35.930356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.930972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.930987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.931004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.037 [2024-12-09 11:10:35.931018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.931049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.037 [2024-12-09 11:10:35.931062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.037 [2024-12-09 11:10:35.931077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39544 len:8 PRP1 0x0 PRP2 0x0 00:59:46.037 [2024-12-09 11:10:35.931091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.931151] 
bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:59:46.037 [2024-12-09 11:10:35.931183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.037 [2024-12-09 11:10:35.931200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.037 [2024-12-09 11:10:35.931215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.037 [2024-12-09 11:10:35.931230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:35.931246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.038 [2024-12-09 11:10:35.931261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:35.931276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.038 [2024-12-09 11:10:35.931291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:35.931305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:59:46.038 [2024-12-09 11:10:35.931339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0790 (9): Bad file descriptor 00:59:46.038 [2024-12-09 11:10:35.935437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:46.038 [2024-12-09 11:10:35.957462] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:59:46.038 10107.00 IOPS, 39.48 MiB/s [2024-12-09T10:10:47.214Z] 10202.33 IOPS, 39.85 MiB/s [2024-12-09T10:10:47.214Z] 10271.14 IOPS, 40.12 MiB/s [2024-12-09T10:10:47.214Z] 10295.62 IOPS, 40.22 MiB/s [2024-12-09T10:10:47.214Z] 10253.11 IOPS, 40.05 MiB/s [2024-12-09T10:10:47.214Z] [2024-12-09 11:10:40.521089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 
[2024-12-09 11:10:40.521260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.038 [2024-12-09 11:10:40.521668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 
[2024-12-09 11:10:40.521830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.521974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.521989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.038 [2024-12-09 11:10:40.522296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.038 [2024-12-09 11:10:40.522311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 [2024-12-09 11:10:40.522343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 
[2024-12-09 11:10:40.522374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 [2024-12-09 11:10:40.522409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 [2024-12-09 11:10:40.522440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 [2024-12-09 11:10:40.522472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:59:46.039 [2024-12-09 11:10:40.522504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 
11:10:40.522925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.522973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.522989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:59:46.039 [2024-12-09 11:10:40.523293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.039 [2024-12-09 11:10:40.523467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.039 [2024-12-09 11:10:40.523484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 
11:10:40.523844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.523970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.523985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:46.040 [2024-12-09 11:10:40.524593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.040 [2024-12-09 11:10:40.524656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30696 len:8 PRP1 0x0 PRP2 0x0 00:59:46.040 [2024-12-09 11:10:40.524672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.040 [2024-12-09 11:10:40.524701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.040 [2024-12-09 11:10:40.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30704 len:8 PRP1 0x0 PRP2 0x0 00:59:46.040 [2024-12-09 11:10:40.524728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.040 [2024-12-09 11:10:40.524754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.040 [2024-12-09 11:10:40.524766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30712 len:8 PRP1 0x0 PRP2 0x0 00:59:46.040 [2024-12-09 11:10:40.524780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.040 [2024-12-09 11:10:40.524795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.040 [2024-12-09 11:10:40.524809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.040 [2024-12-09 11:10:40.524820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30720 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.524834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.524849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.524863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.524875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30728 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.524891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.524906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.524918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.524930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30736 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.524944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.524959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.524971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.524983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30744 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.524998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30752 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30760 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30768 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30776 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30784 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30792 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:59:46.041 [2024-12-09 11:10:40.525347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30800 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30808 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30816 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:30824 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30832 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30840 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30848 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 
11:10:40.525721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:46.041 [2024-12-09 11:10:40.525733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:46.041 [2024-12-09 11:10:40.525745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30856 len:8 PRP1 0x0 PRP2 0x0 00:59:46.041 [2024-12-09 11:10:40.525759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525817] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:59:46.041 [2024-12-09 11:10:40.525853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.041 [2024-12-09 11:10:40.525869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.041 [2024-12-09 11:10:40.525900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.041 [2024-12-09 11:10:40.525930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:59:46.041 [2024-12-09 11:10:40.525960] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:46.041 [2024-12-09 11:10:40.525976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:59:46.041 [2024-12-09 11:10:40.526010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0790 (9): Bad file descriptor 00:59:46.041 [2024-12-09 11:10:40.530102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:59:46.041 [2024-12-09 11:10:40.566148] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:59:46.041 10166.90 IOPS, 39.71 MiB/s [2024-12-09T10:10:47.217Z] 10238.27 IOPS, 39.99 MiB/s [2024-12-09T10:10:47.217Z] 10319.42 IOPS, 40.31 MiB/s [2024-12-09T10:10:47.217Z] 10294.00 IOPS, 40.21 MiB/s [2024-12-09T10:10:47.217Z] 10248.36 IOPS, 40.03 MiB/s 00:59:46.041 Latency(us) 00:59:46.041 [2024-12-09T10:10:47.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:46.041 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:59:46.041 Verification LBA range: start 0x0 length 0x4000 00:59:46.041 NVMe0n1 : 15.01 10219.79 39.92 407.45 0.00 12011.90 612.62 37839.92 00:59:46.041 [2024-12-09T10:10:47.217Z] =================================================================================================================== 00:59:46.041 [2024-12-09T10:10:47.217Z] Total : 10219.79 39.92 407.45 0.00 12011.90 612.62 37839.92 00:59:46.041 Received shutdown signal, test time was about 15.000000 seconds 00:59:46.041 00:59:46.041 Latency(us) 00:59:46.041 [2024-12-09T10:10:47.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:46.041 [2024-12-09T10:10:47.217Z] =================================================================================================================== 
00:59:46.042 [2024-12-09T10:10:47.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2501992 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2501992 /var/tmp/bdevperf.sock 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2501992 ']' 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:59:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:59:46.042 [2024-12-09 11:10:46.979349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:59:46.042 11:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:59:46.042 [2024-12-09 11:10:47.183954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:59:46.301 11:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:46.301 NVMe0n1 00:59:46.301 11:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:46.868 00:59:46.868 11:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:59:47.127 00:59:47.386 11:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:59:47.386 11:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:59:47.645 11:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:59:47.904 11:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:59:51.213 11:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:59:51.213 11:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:59:51.213 11:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2502712 00:59:51.213 11:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:59:51.213 11:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2502712 00:59:52.151 { 00:59:52.151 "results": [ 00:59:52.151 { 00:59:52.151 "job": "NVMe0n1", 00:59:52.151 "core_mask": "0x1", 00:59:52.151 "workload": "verify", 00:59:52.151 "status": "finished", 00:59:52.152 "verify_range": { 00:59:52.152 "start": 0, 00:59:52.152 "length": 16384 00:59:52.152 }, 00:59:52.152 "queue_depth": 128, 00:59:52.152 "io_size": 4096, 00:59:52.152 "runtime": 1.011165, 00:59:52.152 "iops": 9837.168019067116, 00:59:52.152 "mibps": 38.42643757448092, 00:59:52.152 "io_failed": 0, 00:59:52.152 "io_timeout": 0, 00:59:52.152 "avg_latency_us": 
12937.151394914788, 00:59:52.152 "min_latency_us": 1346.3373913043479, 00:59:52.152 "max_latency_us": 14360.932173913043 00:59:52.152 } 00:59:52.152 ], 00:59:52.152 "core_count": 1 00:59:52.152 } 00:59:52.152 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:59:52.152 [2024-12-09 11:10:46.439302] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 00:59:52.152 [2024-12-09 11:10:46.439392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501992 ] 00:59:52.152 [2024-12-09 11:10:46.568814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:52.152 [2024-12-09 11:10:46.618777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:52.152 [2024-12-09 11:10:48.835879] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:59:52.152 [2024-12-09 11:10:48.835944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:59:52.152 [2024-12-09 11:10:48.835964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:52.152 [2024-12-09 11:10:48.835981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:59:52.152 [2024-12-09 11:10:48.835996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:52.152 [2024-12-09 11:10:48.836012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:59:52.152 [2024-12-09 11:10:48.836027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:52.152 [2024-12-09 11:10:48.836042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:59:52.152 [2024-12-09 11:10:48.836057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:52.152 [2024-12-09 11:10:48.836072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:59:52.152 [2024-12-09 11:10:48.836113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:59:52.152 [2024-12-09 11:10:48.836138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc790 (9): Bad file descriptor 00:59:52.152 [2024-12-09 11:10:48.846895] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:59:52.152 Running I/O for 1 seconds... 
00:59:52.152 9817.00 IOPS, 38.35 MiB/s 00:59:52.152 Latency(us) 00:59:52.152 [2024-12-09T10:10:53.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:52.152 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:59:52.152 Verification LBA range: start 0x0 length 0x4000 00:59:52.152 NVMe0n1 : 1.01 9837.17 38.43 0.00 0.00 12937.15 1346.34 14360.93 00:59:52.152 [2024-12-09T10:10:53.328Z] =================================================================================================================== 00:59:52.152 [2024-12-09T10:10:53.328Z] Total : 9837.17 38.43 0.00 0.00 12937.15 1346.34 14360.93 00:59:52.152 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:59:52.152 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:59:52.411 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:59:52.670 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:59:52.670 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:59:52.930 11:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:59:53.189 11:10:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2501992 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2501992 ']' 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2501992 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501992 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501992' 00:59:56.478 killing process with pid 2501992 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2501992 00:59:56.478 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2501992 00:59:56.737 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:59:56.737 11:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:59:56.996 rmmod nvme_tcp 00:59:56.996 rmmod nvme_fabrics 00:59:56.996 rmmod nvme_keyring 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2499481 ']' 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2499481 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2499481 ']' 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2499481 00:59:56.996 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499481 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499481' 00:59:57.256 killing process with pid 2499481 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2499481 00:59:57.256 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2499481 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:57.516 11:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:59:59.424 00:59:59.424 real 0m40.264s 00:59:59.424 user 2m8.762s 00:59:59.424 sys 
0m9.266s 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:59:59.424 ************************************ 00:59:59.424 END TEST nvmf_failover 00:59:59.424 ************************************ 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:59.424 11:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:59:59.684 ************************************ 00:59:59.684 START TEST nvmf_host_discovery 00:59:59.684 ************************************ 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:59:59.684 * Looking for test storage... 
00:59:59.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:59:59.684 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:59.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:59.685 --rc genhtml_branch_coverage=1 00:59:59.685 --rc genhtml_function_coverage=1 00:59:59.685 --rc 
genhtml_legend=1 00:59:59.685 --rc geninfo_all_blocks=1 00:59:59.685 --rc geninfo_unexecuted_blocks=1 00:59:59.685 00:59:59.685 ' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:59.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:59.685 --rc genhtml_branch_coverage=1 00:59:59.685 --rc genhtml_function_coverage=1 00:59:59.685 --rc genhtml_legend=1 00:59:59.685 --rc geninfo_all_blocks=1 00:59:59.685 --rc geninfo_unexecuted_blocks=1 00:59:59.685 00:59:59.685 ' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:59.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:59.685 --rc genhtml_branch_coverage=1 00:59:59.685 --rc genhtml_function_coverage=1 00:59:59.685 --rc genhtml_legend=1 00:59:59.685 --rc geninfo_all_blocks=1 00:59:59.685 --rc geninfo_unexecuted_blocks=1 00:59:59.685 00:59:59.685 ' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:59.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:59.685 --rc genhtml_branch_coverage=1 00:59:59.685 --rc genhtml_function_coverage=1 00:59:59.685 --rc genhtml_legend=1 00:59:59.685 --rc geninfo_all_blocks=1 00:59:59.685 --rc geninfo_unexecuted_blocks=1 00:59:59.685 00:59:59.685 ' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:59.685 11:11:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:59.685 11:11:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:59.685 11:11:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:59.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:59.685 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:59.945 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:59:59.945 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:59:59.945 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:59:59.945 11:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 01:00:06.507 
11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:00:06.507 11:11:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:00:06.507 Found 0000:af:00.0 (0x8086 - 0x159b) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:00:06.507 Found 0000:af:00.1 (0x8086 - 0x159b) 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:06.507 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:00:06.508 Found net devices under 0000:af:00.0: cvl_0_0 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:00:06.508 Found net devices under 0000:af:00.1: cvl_0_1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:00:06.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:06.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 01:00:06.508 01:00:06.508 --- 10.0.0.2 ping statistics --- 01:00:06.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:06.508 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:00:06.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:06.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 01:00:06.508 01:00:06.508 --- 10.0.0.1 ping statistics --- 01:00:06.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:06.508 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:06.508 
11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:06.508 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2506825 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2506825 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2506825 ']' 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:06.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:06.766 11:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:06.766 [2024-12-09 11:11:07.781529] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:00:06.766 [2024-12-09 11:11:07.781610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:06.766 [2024-12-09 11:11:07.886012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:06.766 [2024-12-09 11:11:07.928469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:06.766 [2024-12-09 11:11:07.928515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:06.766 [2024-12-09 11:11:07.928525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:06.766 [2024-12-09 11:11:07.928534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:06.766 [2024-12-09 11:11:07.928543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:00:06.766 [2024-12-09 11:11:07.929037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 [2024-12-09 11:11:08.751031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 [2024-12-09 11:11:08.763226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:00:07.699 11:11:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 null0 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 null1 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2506916 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2506916 /tmp/host.sock 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2506916 ']' 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:00:07.699 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:07.699 11:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.699 [2024-12-09 11:11:08.832855] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:00:07.699 [2024-12-09 11:11:08.832914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506916 ] 01:00:07.957 [2024-12-09 11:11:08.934810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:07.957 [2024-12-09 11:11:08.988584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:00:07.957 
11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:00:07.957 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:00:08.214 11:11:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:00:08.214 
11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:08.214 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.471 [2024-12-09 11:11:09.456985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:08.471 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:08.472 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:08.729 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 01:00:08.729 11:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:00:09.293 [2024-12-09 11:11:10.177863] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:00:09.293 [2024-12-09 11:11:10.177889] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:00:09.293 [2024-12-09 11:11:10.177909] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:00:09.293 [2024-12-09 11:11:10.264179] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:00:09.550 [2024-12-09 11:11:10.479402] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 01:00:09.550 [2024-12-09 11:11:10.480441] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x20390b0:1 started. 01:00:09.550 [2024-12-09 11:11:10.482657] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:00:09.550 [2024-12-09 11:11:10.482681] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:00:09.550 [2024-12-09 11:11:10.527397] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x20390b0 was disconnected and freed. delete nvme_qpair. 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.550 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:09.808 11:11:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:00:09.808 
11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 [2024-12-09 11:11:10.902330] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2039290:1 started. 01:00:09.808 [2024-12-09 11:11:10.907869] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2039290 was disconnected and freed. delete nvme_qpair. 
01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:09.808 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.066 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:00:10.066 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:00:10.066 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:10.066 [2024-12-09 11:11:11.009131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:00:10.066 [2024-12-09 11:11:11.010167] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:00:10.066 [2024-12-09 11:11:11.010194] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:10.066 11:11:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:10.066 [2024-12-09 11:11:11.096452] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 01:00:10.066 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:00:10.323 [2024-12-09 11:11:11.396967] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 01:00:10.323 [2024-12-09 11:11:11.397016] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:00:10.323 [2024-12-09 11:11:11.397031] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
01:00:10.323 [2024-12-09 11:11:11.397042] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.278 [2024-12-09 11:11:12.285445] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:00:11.278 [2024-12-09 11:11:12.285475] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.278 [2024-12-09 11:11:12.293977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:00:11.278 [2024-12-09 11:11:12.294008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:11.278 [2024-12-09 11:11:12.294025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:00:11.278 [2024-12-09 11:11:12.294041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:11.278 [2024-12-09 11:11:12.294058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:00:11.278 [2024-12-09 11:11:12.294074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:11.278 [2024-12-09 11:11:12.294091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:00:11.278 [2024-12-09 11:11:12.294112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:11.278 [2024-12-09 11:11:12.294128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.278 [2024-12-09 11:11:12.303985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.278 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.278 [2024-12-09 11:11:12.314025] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.278 [2024-12-09 11:11:12.314044] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.278 [2024-12-09 11:11:12.314058] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.278 [2024-12-09 11:11:12.314068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.278 [2024-12-09 11:11:12.314097] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:00:11.278 [2024-12-09 11:11:12.314320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.278 [2024-12-09 11:11:12.314346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.278 [2024-12-09 11:11:12.314363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.278 [2024-12-09 11:11:12.314385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.278 [2024-12-09 11:11:12.314405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.278 [2024-12-09 11:11:12.314419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.278 [2024-12-09 11:11:12.314435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.278 [2024-12-09 11:11:12.314449] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.278 [2024-12-09 11:11:12.314460] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.278 [2024-12-09 11:11:12.314470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.278 [2024-12-09 11:11:12.324131] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.278 [2024-12-09 11:11:12.324158] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
01:00:11.278 [2024-12-09 11:11:12.324168] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.278 [2024-12-09 11:11:12.324177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.278 [2024-12-09 11:11:12.324203] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.278 [2024-12-09 11:11:12.324463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.278 [2024-12-09 11:11:12.324486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.278 [2024-12-09 11:11:12.324502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.278 [2024-12-09 11:11:12.324523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.324543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.324558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.324577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.324591] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.279 [2024-12-09 11:11:12.324601] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.279 [2024-12-09 11:11:12.324610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:00:11.279 [2024-12-09 11:11:12.334237] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.279 [2024-12-09 11:11:12.334258] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.279 [2024-12-09 11:11:12.334268] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.334278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.279 [2024-12-09 11:11:12.334304] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.334508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.279 [2024-12-09 11:11:12.334533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.279 [2024-12-09 11:11:12.334549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.279 [2024-12-09 11:11:12.334570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.334590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.334604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.334619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.334632] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:00:11.279 [2024-12-09 11:11:12.334642] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.279 [2024-12-09 11:11:12.334657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.279 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 01:00:11.279 [2024-12-09 11:11:12.344339] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.279 [2024-12-09 11:11:12.344363] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.279 [2024-12-09 11:11:12.344374] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.344386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.279 [2024-12-09 11:11:12.344413] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.344555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.279 [2024-12-09 11:11:12.344579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.279 [2024-12-09 11:11:12.344595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.279 [2024-12-09 11:11:12.344615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.344635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.344657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.344672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.344684] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:00:11.279 [2024-12-09 11:11:12.344694] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.279 [2024-12-09 11:11:12.344704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.279 [2024-12-09 11:11:12.354448] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.279 [2024-12-09 11:11:12.354469] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.279 [2024-12-09 11:11:12.354480] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.354489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.279 [2024-12-09 11:11:12.354516] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:00:11.279 [2024-12-09 11:11:12.354660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.279 [2024-12-09 11:11:12.354683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.279 [2024-12-09 11:11:12.354699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.279 [2024-12-09 11:11:12.354718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.354749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.354764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.354778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.354792] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.279 [2024-12-09 11:11:12.354810] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.279 [2024-12-09 11:11:12.354819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.279 [2024-12-09 11:11:12.364551] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.279 [2024-12-09 11:11:12.364568] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
01:00:11.279 [2024-12-09 11:11:12.364578] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.364587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.279 [2024-12-09 11:11:12.364613] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.364792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.279 [2024-12-09 11:11:12.364813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.279 [2024-12-09 11:11:12.364829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.279 [2024-12-09 11:11:12.364848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.364889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.364905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.364920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.364932] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.279 [2024-12-09 11:11:12.364942] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.279 [2024-12-09 11:11:12.364951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:00:11.279 [2024-12-09 11:11:12.374654] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.279 [2024-12-09 11:11:12.374673] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.279 [2024-12-09 11:11:12.374683] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.374693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.279 [2024-12-09 11:11:12.374719] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.279 [2024-12-09 11:11:12.374924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.279 [2024-12-09 11:11:12.374947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.279 [2024-12-09 11:11:12.374963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.279 [2024-12-09 11:11:12.374983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.279 [2024-12-09 11:11:12.375013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.279 [2024-12-09 11:11:12.375028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.279 [2024-12-09 11:11:12.375043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.279 [2024-12-09 11:11:12.375059] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:00:11.280 [2024-12-09 11:11:12.375070] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.280 [2024-12-09 11:11:12.375079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.280 [2024-12-09 11:11:12.384753] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.280 [2024-12-09 11:11:12.384771] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.280 [2024-12-09 11:11:12.384781] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.280 [2024-12-09 11:11:12.384790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.280 [2024-12-09 11:11:12.384815] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:00:11.280 [2024-12-09 11:11:12.385022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.280 [2024-12-09 11:11:12.385044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.280 [2024-12-09 11:11:12.385060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.280 [2024-12-09 11:11:12.385080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.280 [2024-12-09 11:11:12.385120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.280 [2024-12-09 11:11:12.385136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.280 [2024-12-09 11:11:12.385152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.280 [2024-12-09 11:11:12.385165] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.280 [2024-12-09 11:11:12.385175] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.280 [2024-12-09 11:11:12.385184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.280 [2024-12-09 11:11:12.394849] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.280 [2024-12-09 11:11:12.394867] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
01:00:11.280 [2024-12-09 11:11:12.394877] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.280 [2024-12-09 11:11:12.394886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.280 [2024-12-09 11:11:12.394911] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.280 [2024-12-09 11:11:12.395107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.280 [2024-12-09 11:11:12.395129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.280 [2024-12-09 11:11:12.395145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.280 [2024-12-09 11:11:12.395166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.280 [2024-12-09 11:11:12.395197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.280 [2024-12-09 11:11:12.395214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.280 [2024-12-09 11:11:12.395233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:11.280 [2024-12-09 11:11:12.395246] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.280 [2024-12-09 11:11:12.395256] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.280 [2024-12-09 11:11:12.395267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.280 [2024-12-09 11:11:12.404944] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:00:11.280 [2024-12-09 11:11:12.404963] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:00:11.280 [2024-12-09 11:11:12.404973] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:00:11.280 [2024-12-09 11:11:12.404983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:11.280 [2024-12-09 11:11:12.405009] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:00:11.280 [2024-12-09 11:11:12.405257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:11.280 [2024-12-09 11:11:12.405280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009690 with addr=10.0.0.2, port=4420 01:00:11.280 [2024-12-09 11:11:12.405296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009690 is same with the state(6) to be set 01:00:11.280 [2024-12-09 11:11:12.405317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009690 (9): Bad file descriptor 01:00:11.280 [2024-12-09 11:11:12.405355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:11.280 [2024-12-09 11:11:12.405372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:11.280 [2024-12-09 11:11:12.405388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
01:00:11.280 [2024-12-09 11:11:12.405401] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:00:11.280 [2024-12-09 11:11:12.405410] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:00:11.280 [2024-12-09 11:11:12.405420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:00:11.280 [2024-12-09 11:11:12.412375] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:00:11.280 [2024-12-09 11:11:12.412402] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:00:11.280 11:11:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:00:11.280 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:00:11.593 11:11:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.593 11:11:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.593 11:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:12.617 [2024-12-09 11:11:13.716842] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:00:12.617 [2024-12-09 
11:11:13.716867] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:00:12.617 [2024-12-09 11:11:13.716886] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:00:12.893 [2024-12-09 11:11:13.803147] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 01:00:12.893 [2024-12-09 11:11:13.902043] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 01:00:12.893 [2024-12-09 11:11:13.902798] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2006f40:1 started. 01:00:12.893 [2024-12-09 11:11:13.905257] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:00:12.893 [2024-12-09 11:11:13.905294] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:00:12.893 [2024-12-09 11:11:13.906968] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2006f40 was disconnected and freed. delete nvme_qpair. 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:12.893 request: 01:00:12.893 { 01:00:12.893 "name": "nvme", 01:00:12.893 "trtype": "tcp", 01:00:12.893 "traddr": "10.0.0.2", 01:00:12.893 "adrfam": "ipv4", 01:00:12.893 "trsvcid": "8009", 01:00:12.893 "hostnqn": "nqn.2021-12.io.spdk:test", 01:00:12.893 "wait_for_attach": true, 01:00:12.893 "method": "bdev_nvme_start_discovery", 01:00:12.893 "req_id": 1 01:00:12.893 } 01:00:12.893 Got JSON-RPC error response 01:00:12.893 response: 01:00:12.893 { 01:00:12.893 "code": -17, 01:00:12.893 "message": "File exists" 01:00:12.893 } 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:12.893 11:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:12.893 request: 01:00:12.893 { 01:00:12.893 "name": "nvme_second", 01:00:12.893 "trtype": "tcp", 01:00:12.893 "traddr": "10.0.0.2", 01:00:12.893 "adrfam": "ipv4", 01:00:12.893 "trsvcid": "8009", 01:00:12.893 "hostnqn": "nqn.2021-12.io.spdk:test", 01:00:12.893 "wait_for_attach": true, 01:00:12.893 "method": "bdev_nvme_start_discovery", 01:00:12.893 "req_id": 1 01:00:12.893 } 01:00:12.893 Got JSON-RPC error response 01:00:12.893 response: 01:00:12.893 
{ 01:00:12.893 "code": -17, 01:00:12.893 "message": "File exists" 01:00:12.893 } 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:12.893 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:00:12.894 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:00:13.205 11:11:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:00:13.205 11:11:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.205 11:11:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:14.212 [2024-12-09 11:11:15.164798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:14.212 [2024-12-09 11:11:15.164848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201ec40 with addr=10.0.0.2, port=8010 01:00:14.212 [2024-12-09 11:11:15.164876] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:00:14.212 [2024-12-09 11:11:15.164891] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:00:14.212 [2024-12-09 11:11:15.164904] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:00:15.142 [2024-12-09 11:11:16.167180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:00:15.142 [2024-12-09 11:11:16.167225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200aac0 with addr=10.0.0.2, port=8010 01:00:15.142 [2024-12-09 11:11:16.167249] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:00:15.142 [2024-12-09 11:11:16.167263] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:00:15.142 [2024-12-09 11:11:16.167276] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:00:16.073 [2024-12-09 11:11:17.169264] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 01:00:16.073 request: 01:00:16.073 { 01:00:16.073 "name": "nvme_second", 01:00:16.073 "trtype": "tcp", 01:00:16.073 "traddr": "10.0.0.2", 01:00:16.073 "adrfam": "ipv4", 01:00:16.073 "trsvcid": "8010", 01:00:16.073 "hostnqn": "nqn.2021-12.io.spdk:test", 01:00:16.073 "wait_for_attach": false, 01:00:16.073 "attach_timeout_ms": 3000, 
01:00:16.073 "method": "bdev_nvme_start_discovery", 01:00:16.073 "req_id": 1 01:00:16.073 } 01:00:16.073 Got JSON-RPC error response 01:00:16.073 response: 01:00:16.073 { 01:00:16.073 "code": -110, 01:00:16.073 "message": "Connection timed out" 01:00:16.073 } 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 2506916 01:00:16.073 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:00:16.074 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:00:16.074 rmmod nvme_tcp 01:00:16.332 rmmod nvme_fabrics 01:00:16.332 rmmod nvme_keyring 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2506825 ']' 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2506825 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2506825 ']' 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2506825 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506825 01:00:16.332 11:11:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506825' 01:00:16.332 killing process with pid 2506825 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2506825 01:00:16.332 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2506825 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:16.593 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:18.495 
11:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:00:18.752 01:00:18.752 real 0m19.062s 01:00:18.752 user 0m22.268s 01:00:18.752 sys 0m6.875s 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:00:18.752 ************************************ 01:00:18.752 END TEST nvmf_host_discovery 01:00:18.752 ************************************ 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:00:18.752 ************************************ 01:00:18.752 START TEST nvmf_host_multipath_status 01:00:18.752 ************************************ 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:00:18.752 * Looking for test storage... 
01:00:18.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 01:00:18.752 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 01:00:19.010 11:11:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 01:00:19.010 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:19.010 11:11:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:19.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.011 --rc genhtml_branch_coverage=1 01:00:19.011 --rc genhtml_function_coverage=1 01:00:19.011 --rc genhtml_legend=1 01:00:19.011 --rc geninfo_all_blocks=1 01:00:19.011 --rc geninfo_unexecuted_blocks=1 01:00:19.011 01:00:19.011 ' 01:00:19.011 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:19.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.011 --rc genhtml_branch_coverage=1 01:00:19.011 --rc genhtml_function_coverage=1 01:00:19.011 --rc genhtml_legend=1 01:00:19.011 --rc geninfo_all_blocks=1 01:00:19.011 --rc geninfo_unexecuted_blocks=1 01:00:19.011 01:00:19.011 ' 01:00:19.011 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:19.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.011 --rc genhtml_branch_coverage=1 01:00:19.011 --rc genhtml_function_coverage=1 01:00:19.011 --rc genhtml_legend=1 01:00:19.011 --rc geninfo_all_blocks=1 01:00:19.011 --rc geninfo_unexecuted_blocks=1 01:00:19.011 01:00:19.011 ' 01:00:19.011 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:19.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.011 --rc genhtml_branch_coverage=1 01:00:19.011 --rc genhtml_function_coverage=1 01:00:19.011 --rc genhtml_legend=1 01:00:19.011 --rc geninfo_all_blocks=1 01:00:19.011 --rc geninfo_unexecuted_blocks=1 01:00:19.011 01:00:19.011 ' 01:00:19.011 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:00:19.011 11:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:00:19.011 
11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:00:19.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:00:19.011 11:11:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 01:00:19.011 11:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:00:25.562 Found 0000:af:00.0 (0x8086 - 0x159b) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:00:25.562 Found 0000:af:00.1 (0x8086 - 0x159b) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:00:25.562 Found net devices under 0000:af:00.0: cvl_0_0 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:25.562 11:11:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:00:25.562 Found net devices under 0000:af:00.1: cvl_0_1 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:00:25.562 11:11:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:00:25.562 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:00:25.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:25.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 01:00:25.563 01:00:25.563 --- 10.0.0.2 ping statistics --- 01:00:25.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:25.563 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:00:25.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:00:25.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 01:00:25.563 01:00:25.563 --- 10.0.0.1 ping statistics --- 01:00:25.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:25.563 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2511333 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2511333 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2511333 ']' 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:25.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:25.563 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:00:25.563 [2024-12-09 11:11:26.520351] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:00:25.563 [2024-12-09 11:11:26.520428] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:25.563 [2024-12-09 11:11:26.653027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:00:25.563 [2024-12-09 11:11:26.705600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:25.563 [2024-12-09 11:11:26.705658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:00:25.563 [2024-12-09 11:11:26.705674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:25.563 [2024-12-09 11:11:26.705688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:25.563 [2024-12-09 11:11:26.705700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:25.563 [2024-12-09 11:11:26.707175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:25.563 [2024-12-09 11:11:26.707181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2511333 01:00:25.820 11:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:00:26.077 [2024-12-09 11:11:27.068075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:26.077 11:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 01:00:26.334 Malloc0 01:00:26.334 11:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:00:26.591 11:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:00:26.847 11:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:00:27.104 [2024-12-09 11:11:28.195621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:27.104 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:00:27.361 [2024-12-09 11:11:28.468431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2511712 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2511712 /var/tmp/bdevperf.sock 01:00:27.361 11:11:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2511712 ']' 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:27.361 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:27.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:27.362 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:27.362 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:00:27.619 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:27.619 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:00:27.619 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:00:27.876 11:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:00:28.454 Nvme0n1 01:00:28.454 11:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
01:00:28.710 Nvme0n1
01:00:28.710 11:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
01:00:28.710 11:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
01:00:31.234 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
01:00:31.235 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
01:00:31.235 11:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
01:00:31.492 11:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
01:00:32.423 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
01:00:32.423 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
01:00:32.423 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:32.423 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:32.681 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:32.681 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
01:00:32.681 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:32.681 11:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:32.938 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:32.938 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:32.938 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:32.938 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:33.195 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:33.195 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:33.195 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:33.195 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:33.453 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:33.453 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
01:00:33.453 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:33.453 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:33.710 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:33.710 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
01:00:33.710 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:33.710 11:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:33.967 11:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:33.967 11:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
01:00:33.967 11:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
01:00:34.224 11:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
01:00:34.482 11:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:35.853 11:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:36.111 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:36.111 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:36.111 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:36.111 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:36.369 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:36.369 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:36.369 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:36.369 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:36.627 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:36.627 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
01:00:36.627 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:36.627 11:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:36.884 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:36.884 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
01:00:36.884 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:36.884 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:37.450 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:37.450 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
01:00:37.450 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
01:00:37.707 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
01:00:37.965 11:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
01:00:38.898 11:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
01:00:38.898 11:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
01:00:38.898 11:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:38.898 11:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:39.155 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:39.155 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
01:00:39.155 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:39.155 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:39.412 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:39.412 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:39.412 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:39.412 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:39.669 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:39.669 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:39.669 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:39.669 11:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:39.926 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:39.926 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
01:00:39.926 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:39.926 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
01:00:40.491 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
01:00:41.056 11:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
01:00:41.313 11:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
01:00:42.244 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
01:00:42.244 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
01:00:42.244 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:42.244 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:42.502 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:42.502 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
01:00:42.502 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:42.502 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:42.759 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:42.759 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:42.759 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:42.759 11:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:43.016 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:43.016 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:43.016 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:43.016 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:43.274 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:43.274 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
01:00:43.274 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:43.274 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:43.531 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:43.531 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
01:00:43.531 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:43.532 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:43.789 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:43.789 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
01:00:43.789 11:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
01:00:44.353 11:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
01:00:44.353 11:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:45.723 11:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:45.980 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:45.981 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:45.981 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:45.981 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:46.238 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:46.238 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:46.238 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:46.238 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:46.494
11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:46.494 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
01:00:46.494 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:46.494 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:46.751 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:46.751 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
01:00:46.751 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:46.751 11:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:47.008 11:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:47.008 11:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
01:00:47.008 11:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
01:00:47.266 11:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
01:00:47.831 11:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
01:00:48.765 11:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
01:00:48.765 11:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
01:00:48.765 11:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:48.765 11:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:49.022 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:49.023 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
01:00:49.023 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:49.023 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:49.279 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:49.279 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:49.279 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:49.279 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:49.537 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:49.537 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:49.537 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:49.537 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:49.794 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:49.794 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
01:00:49.794 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:49.794 11:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:50.051 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:50.051 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
01:00:50.051 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:50.051 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:50.310 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:50.310 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
01:00:50.569 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
01:00:50.569 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
01:00:50.827 11:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
01:00:51.085 11:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_get_io_paths
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:52.457 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:52.714 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:52.714 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
01:00:52.714 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:52.714 11:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
01:00:52.972 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:52.972 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
01:00:52.972 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:52.972 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
01:00:53.228 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:53.228 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
01:00:53.228 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:53.228 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
01:00:53.791 11:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
01:00:54.051 11:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
01:00:54.313 11:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
01:00:55.684 11:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
01:00:55.941 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
01:00:55.941 11:11:57
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:00:55.941 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:55.941 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:00:56.198 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:56.198 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:00:56.198 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:56.198 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:00:56.455 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:56.455 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:00:56.455 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:56.455 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:00:56.713 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:56.713 
11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:00:56.713 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:56.713 11:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:00:56.972 11:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:56.972 11:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:00:56.972 11:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:00:57.537 11:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:00:57.537 11:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:58.915 11:11:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:58.915 11:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:00:59.173 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:59.173 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:00:59.173 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:59.173 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:00:59.431 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:59.431 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:00:59.431 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:59.431 11:12:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:00:59.689 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:00:59.689 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:00:59.947 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:00:59.947 11:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:01:00.204 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:00.204 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:01:00.204 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:00.204 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:01:00.462 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:00.462 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:01:00.462 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:01:00.719 11:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:01:00.977 11:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:02.351 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:01:02.608 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:01:02.608 11:12:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:01:02.608 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:02.609 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:01:02.866 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:02.866 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:01:02.866 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:02.867 11:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:01:03.124 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:03.124 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:01:03.124 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:03.124 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:01:03.381 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:01:03.381 
11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:01:03.381 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:01:03.381 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2511712 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2511712 ']' 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2511712 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511712 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511712' 01:01:03.639 killing process with pid 2511712 01:01:03.639 11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2511712 01:01:03.639 
11:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2511712 01:01:03.639 { 01:01:03.639 "results": [ 01:01:03.639 { 01:01:03.639 "job": "Nvme0n1", 01:01:03.639 "core_mask": "0x4", 01:01:03.639 "workload": "verify", 01:01:03.639 "status": "terminated", 01:01:03.639 "verify_range": { 01:01:03.639 "start": 0, 01:01:03.639 "length": 16384 01:01:03.639 }, 01:01:03.639 "queue_depth": 128, 01:01:03.639 "io_size": 4096, 01:01:03.639 "runtime": 34.729283, 01:01:03.639 "iops": 9172.317205627309, 01:01:03.639 "mibps": 35.829364084481675, 01:01:03.639 "io_failed": 0, 01:01:03.639 "io_timeout": 0, 01:01:03.639 "avg_latency_us": 13933.684379480588, 01:01:03.639 "min_latency_us": 123.77043478260869, 01:01:03.639 "max_latency_us": 4026531.84 01:01:03.639 } 01:01:03.639 ], 01:01:03.639 "core_count": 1 01:01:03.639 } 01:01:03.899 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2511712 01:01:03.899 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 01:01:03.899 [2024-12-09 11:11:28.542042] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:01:03.899 [2024-12-09 11:11:28.542138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511712 ] 01:01:03.899 [2024-12-09 11:11:28.638552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:03.899 [2024-12-09 11:11:28.680571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:03.899 Running I/O for 90 seconds... 
01:01:03.899 8137.00 IOPS, 31.79 MiB/s [2024-12-09T10:12:05.075Z] 8238.50 IOPS, 32.18 MiB/s [2024-12-09T10:12:05.075Z] 8295.00 IOPS, 32.40 MiB/s [2024-12-09T10:12:05.075Z] 8328.75 IOPS, 32.53 MiB/s [2024-12-09T10:12:05.075Z] 8344.00 IOPS, 32.59 MiB/s [2024-12-09T10:12:05.075Z] 8726.67 IOPS, 34.09 MiB/s [2024-12-09T10:12:05.075Z] 9160.29 IOPS, 35.78 MiB/s [2024-12-09T10:12:05.075Z] 9470.25 IOPS, 36.99 MiB/s [2024-12-09T10:12:05.075Z] 9707.33 IOPS, 37.92 MiB/s [2024-12-09T10:12:05.075Z] 9574.00 IOPS, 37.40 MiB/s [2024-12-09T10:12:05.075Z] 9458.00 IOPS, 36.95 MiB/s [2024-12-09T10:12:05.075Z] 9369.58 IOPS, 36.60 MiB/s [2024-12-09T10:12:05.075Z] 9294.92 IOPS, 36.31 MiB/s [2024-12-09T10:12:05.075Z] 9229.64 IOPS, 36.05 MiB/s [2024-12-09T10:12:05.075Z] 9179.93 IOPS, 35.86 MiB/s [2024-12-09T10:12:05.075Z] [2024-12-09 11:11:45.205666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.899 [2024-12-09 11:11:45.205891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:01:03.899 [2024-12-09 11:11:45.205907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.205917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b 
p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:01:03.900 [2024-12-09 11:11:45.206744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:01:03.900 [2024-12-09 11:11:45.206762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:01:03.900 [2024-12-09 11:11:45.206772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
01:01:03.900 [2024-12-09 11:11:45.206789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:01:03.900 [2024-12-09 11:11:45.206799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... repetitive command/completion pairs condensed: WRITE lba:65136-65304 and READ lba:64296-64920, all len:8 on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 11:11:45.206815-11:11:45.211801 ...]
01:01:03.903 [2024-12-09 11:11:45.211801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:01:03.903 [2024-12-09 11:11:45.211812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
01:01:03.903 periodic bandwidth samples (each stamped [2024-12-09T10:12:05.079Z]):
8694.31 IOPS, 33.96 MiB/s
8182.88 IOPS, 31.96 MiB/s
7728.28 IOPS, 30.19 MiB/s
7321.53 IOPS, 28.60 MiB/s
7437.75 IOPS, 29.05 MiB/s
7627.33 IOPS, 29.79 MiB/s
7820.05 IOPS, 30.55 MiB/s
8035.70 IOPS, 31.39 MiB/s
8232.58 IOPS, 32.16 MiB/s
8412.08 IOPS, 32.86 MiB/s
8547.54 IOPS, 33.39 MiB/s
8661.19 IOPS, 33.83 MiB/s
8769.82 IOPS, 34.26 MiB/s
8879.90 IOPS, 34.69 MiB/s
9009.77 IOPS, 35.19 MiB/s
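The wall of *NOTICE* lines above is machine output from SPDK's nvme_qpair.c command/completion printers, and it is far easier to triage by aggregation than by reading line by line. A minimal parsing sketch follows; the regexes and the four sample lines mirror the log format above, and nothing SPDK-specific is assumed beyond that textual format:

```python
import re
from collections import Counter

# Sample lines in the nvme_io_qpair_print_command / spdk_nvme_print_completion
# format seen in the log above (truncated to the fields we parse).
log_lines = [
    "[2024-12-09 11:11:45.206789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000",
    "[2024-12-09 11:11:45.206799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0",
    "[2024-12-09 11:11:45.207542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0",
    "[2024-12-09 11:11:45.207552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0",
]

# Match the command print: opcode, sqid, cid, nsid, lba.
cmd_re = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+)"
)
# Match the completion print: status text plus (sct/sc), qid, cid.
cpl_re = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\w+)/(\w+)\) qid:(\d+) cid:(\d+)"
)

commands = Counter()  # opcode -> count
statuses = Counter()  # (status text, "sct/sc") -> count
lbas = []             # LBAs touched, to see the span of affected IO

for line in log_lines:
    m = cmd_re.search(line)
    if m:
        commands[m.group(1)] += 1
        lbas.append(int(m.group(5)))
        continue
    m = cpl_re.search(line)
    if m:
        statuses[(m.group(1), f"{m.group(2)}/{m.group(3)}")] += 1

print(commands)              # command mix (READ vs WRITE)
print(statuses)              # completion statuses and their counts
print(min(lbas), max(lbas))  # LBA span covered by the failing commands
```

Fed the full console log instead of the four samples, the same loop condenses thousands of lines into a command mix, a status histogram, and an LBA span, which is usually all that matters for spotting an ANA-state failover window like this one.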
[2024-12-09T10:12:05.079Z] 9129.42 IOPS, 35.66 MiB/s [2024-12-09T10:12:05.079Z] [2024-12-09 11:12:02.045658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:03.903 [2024-12-09 11:12:02.045706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:01:03.903 [2024-12-09 11:12:02.045745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:03.903 [2024-12-09 11:12:02.045757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:01:03.903 [2024-12-09 11:12:02.045778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:03.903 [2024-12-09 11:12:02.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:01:03.903 [2024-12-09 11:12:02.046758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:03.903 [2024-12-09 11:12:02.046781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:01:03.903 [2024-12-09 11:12:02.046800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:03.903 [2024-12-09 11:12:02.046812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:01:03.903 [2024-12-09 11:12:02.046827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:01:03.903 [2024-12-09 11:12:02.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
01:01:03.903 [2024-12-09 11:12:02.046854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:01:03.903 [2024-12-09 11:12:02.046864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
01:01:03.903 [2024-12-09 11:12:02.046880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:01:03.903 [2024-12-09 11:12:02.046891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
01:01:03.903 [2024-12-09 11:12:02.046907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:01:03.903 [2024-12-09 11:12:02.046918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
01:01:03.903 9238.44 IOPS, 36.09 MiB/s [2024-12-09T10:12:05.079Z] 9212.79 IOPS, 35.99 MiB/s [2024-12-09T10:12:05.079Z] 9186.97 IOPS, 35.89 MiB/s [2024-12-09T10:12:05.079Z] Received shutdown signal, test time was about 34.729919 seconds
01:01:03.903
01:01:03.903 Latency(us)
01:01:03.903 [2024-12-09T10:12:05.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:01:03.903 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:01:03.903 Verification LBA range: start 0x0 length 0x4000
01:01:03.903 Nvme0n1 : 34.73 9172.32 35.83 0.00 0.00 13933.68 123.77 4026531.84
01:01:03.903 [2024-12-09T10:12:05.079Z] ===================================================================================================================
01:01:03.903 [2024-12-09T10:12:05.079Z] Total : 9172.32 35.83 0.00 0.00 13933.68 123.77 4026531.84
01:01:03.903 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
01:01:04.168 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:01:04.168 rmmod nvme_tcp
01:01:04.168 rmmod nvme_fabrics
01:01:04.168 rmmod nvme_keyring
01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
01:01:04.426 11:12:05
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2511333 ']' 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2511333 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2511333 ']' 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2511333 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511333 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511333' 01:01:04.426 killing process with pid 2511333 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2511333 01:01:04.426 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2511333 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 01:01:04.683 11:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:04.683 11:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:01:07.216 01:01:07.216 real 0m48.029s 01:01:07.216 user 2m12.325s 01:01:07.216 sys 0m16.500s 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:01:07.216 ************************************ 01:01:07.216 END TEST nvmf_host_multipath_status 01:01:07.216 ************************************ 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:07.216 11:12:07 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:01:07.216 ************************************ 01:01:07.216 START TEST nvmf_discovery_remove_ifc 01:01:07.216 ************************************ 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:01:07.216 * Looking for test storage... 01:01:07.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 01:01:07.216 11:12:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 01:01:07.216 11:12:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:07.216 
11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 01:01:07.216 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:07.217 --rc genhtml_branch_coverage=1 01:01:07.217 --rc genhtml_function_coverage=1 01:01:07.217 --rc genhtml_legend=1 01:01:07.217 --rc geninfo_all_blocks=1 01:01:07.217 --rc geninfo_unexecuted_blocks=1 01:01:07.217 01:01:07.217 ' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:07.217 --rc genhtml_branch_coverage=1 01:01:07.217 --rc genhtml_function_coverage=1 01:01:07.217 --rc genhtml_legend=1 01:01:07.217 --rc geninfo_all_blocks=1 01:01:07.217 --rc geninfo_unexecuted_blocks=1 01:01:07.217 01:01:07.217 ' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:07.217 --rc genhtml_branch_coverage=1 01:01:07.217 --rc genhtml_function_coverage=1 01:01:07.217 --rc genhtml_legend=1 01:01:07.217 --rc geninfo_all_blocks=1 01:01:07.217 --rc geninfo_unexecuted_blocks=1 01:01:07.217 01:01:07.217 ' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:07.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:07.217 --rc genhtml_branch_coverage=1 01:01:07.217 --rc genhtml_function_coverage=1 01:01:07.217 --rc genhtml_legend=1 
01:01:07.217 --rc geninfo_all_blocks=1 01:01:07.217 --rc geninfo_unexecuted_blocks=1 01:01:07.217 01:01:07.217 ' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:07.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:01:07.217 
11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 01:01:07.217 11:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 01:01:13.773 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:01:13.774 Found 0000:af:00.0 (0x8086 - 0x159b) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:01:13.774 Found 0000:af:00.1 (0x8086 - 0x159b) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:01:13.774 Found net devices under 0000:af:00.0: cvl_0_0 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:13.774 11:12:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:01:13.774 Found net devices under 0000:af:00.1: cvl_0_1 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:01:13.774 11:12:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:01:13.774 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:01:13.775 11:12:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:01:14.033 11:12:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:01:14.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:01:14.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 01:01:14.033 01:01:14.033 --- 10.0.0.2 ping statistics --- 01:01:14.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:14.033 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:01:14.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:01:14.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 01:01:14.033 01:01:14.033 --- 10.0.0.1 ping statistics --- 01:01:14.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:14.033 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2520301 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2520301 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2520301 ']' 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:14.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:14.033 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:14.291 [2024-12-09 11:12:15.239464] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:01:14.291 [2024-12-09 11:12:15.239527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:14.291 [2024-12-09 11:12:15.325814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:14.291 [2024-12-09 11:12:15.371994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:14.291 [2024-12-09 11:12:15.372037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:01:14.291 [2024-12-09 11:12:15.372050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:14.291 [2024-12-09 11:12:15.372061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:14.291 [2024-12-09 11:12:15.372069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:14.291 [2024-12-09 11:12:15.372558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:14.548 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:14.548 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:01:14.548 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:14.549 [2024-12-09 11:12:15.534794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:14.549 [2024-12-09 11:12:15.542979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:01:14.549 null0 01:01:14.549 [2024-12-09 11:12:15.574952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2520443 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2520443 /tmp/host.sock 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2520443 ']' 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:01:14.549 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:14.549 11:12:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:01:14.549 [2024-12-09 11:12:15.657338] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:01:14.549 [2024-12-09 11:12:15.657412] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520443 ] 01:01:14.807 [2024-12-09 11:12:15.783104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:14.807 [2024-12-09 11:12:15.838038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.372 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:15.630 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:15.630 11:12:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:01:15.630 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.630 11:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:16.561 [2024-12-09 11:12:17.634820] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:01:16.561 [2024-12-09 11:12:17.634847] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:01:16.561 [2024-12-09 11:12:17.634867] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:01:16.561 [2024-12-09 11:12:17.721158] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:01:16.819 [2024-12-09 11:12:17.821985] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 01:01:16.819 [2024-12-09 11:12:17.822925] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x103ac20:1 started. 
01:01:16.819 [2024-12-09 11:12:17.825083] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:01:16.819 [2024-12-09 11:12:17.825137] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:01:16.819 [2024-12-09 11:12:17.825167] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:01:16.819 [2024-12-09 11:12:17.825187] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:01:16.819 [2024-12-09 11:12:17.825216] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:16.819 [2024-12-09 11:12:17.832073] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x103ac20 was disconnected and freed. delete nvme_qpair. 
01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:16.819 11:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:17.077 11:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:17.077 11:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:18.008 11:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
01:01:18.941 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:19.198 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:19.198 11:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:20.130 11:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:21.062 11:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:22.434 [2024-12-09 11:12:23.265891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:01:22.434 [2024-12-09 
11:12:23.265950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:01:22.434 [2024-12-09 11:12:23.265975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:22.434 [2024-12-09 11:12:23.265994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:01:22.434 [2024-12-09 11:12:23.266010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:22.434 [2024-12-09 11:12:23.266026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:01:22.434 [2024-12-09 11:12:23.266041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:22.434 [2024-12-09 11:12:23.266057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:01:22.434 [2024-12-09 11:12:23.266072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:22.434 [2024-12-09 11:12:23.266088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:01:22.434 [2024-12-09 11:12:23.266104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:22.434 [2024-12-09 11:12:23.266118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10174d0 is same with the state(6) to be set 01:01:22.434 [2024-12-09 11:12:23.275910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x10174d0 (9): Bad file descriptor 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:22.434 11:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:22.434 [2024-12-09 11:12:23.285954] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:01:22.434 [2024-12-09 11:12:23.285975] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:01:22.434 [2024-12-09 11:12:23.285989] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:01:22.434 [2024-12-09 11:12:23.285999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:01:22.434 [2024-12-09 11:12:23.286035] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:23.366 [2024-12-09 11:12:24.338669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 01:01:23.366 [2024-12-09 11:12:24.338726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10174d0 with addr=10.0.0.2, port=4420 01:01:23.366 [2024-12-09 11:12:24.338749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10174d0 is same with the state(6) to be set 01:01:23.366 [2024-12-09 11:12:24.338789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10174d0 (9): Bad file descriptor 01:01:23.366 [2024-12-09 11:12:24.339247] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
01:01:23.366 [2024-12-09 11:12:24.339285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:01:23.366 [2024-12-09 11:12:24.339301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:01:23.366 [2024-12-09 11:12:24.339317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:01:23.366 [2024-12-09 11:12:24.339331] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:01:23.366 [2024-12-09 11:12:24.339342] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:01:23.366 [2024-12-09 11:12:24.339352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:01:23.366 [2024-12-09 11:12:24.339367] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:01:23.366 [2024-12-09 11:12:24.339377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:01:23.366 11:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:24.298 [2024-12-09 11:12:25.341805] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:01:24.298 [2024-12-09 11:12:25.341836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
01:01:24.298 [2024-12-09 11:12:25.341857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:01:24.298 [2024-12-09 11:12:25.341872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:01:24.298 [2024-12-09 11:12:25.341887] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 01:01:24.298 [2024-12-09 11:12:25.341902] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:01:24.298 [2024-12-09 11:12:25.341912] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:01:24.298 [2024-12-09 11:12:25.341922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:01:24.298 [2024-12-09 11:12:25.341953] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 01:01:24.298 [2024-12-09 11:12:25.341986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:01:24.298 [2024-12-09 11:12:25.342005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:24.298 [2024-12-09 11:12:25.342024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:01:24.298 [2024-12-09 11:12:25.342041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:24.298 [2024-12-09 11:12:25.342057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
01:01:24.298 [2024-12-09 11:12:25.342072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:24.298 [2024-12-09 11:12:25.342091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:01:24.298 [2024-12-09 11:12:25.342107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:24.298 [2024-12-09 11:12:25.342128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:01:24.298 [2024-12-09 11:12:25.342144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:24.298 [2024-12-09 11:12:25.342160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 01:01:24.298 [2024-12-09 11:12:25.342259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006bf0 (9): Bad file descriptor 01:01:24.298 [2024-12-09 11:12:25.343228] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:01:24.298 [2024-12-09 11:12:25.343247] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:24.298 
11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:01:24.298 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:01:24.555 11:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:01:25.485 11:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:26.416 [2024-12-09 11:12:27.397815] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:01:26.416 [2024-12-09 11:12:27.397842] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:01:26.416 [2024-12-09 11:12:27.397864] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:01:26.417 [2024-12-09 11:12:27.485130] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:01:26.674 11:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:01:26.674 [2024-12-09 11:12:27.707382] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 01:01:26.674 [2024-12-09 11:12:27.708171] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10090d0:1 started. 
01:01:26.674 [2024-12-09 11:12:27.709703] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:01:26.674 [2024-12-09 11:12:27.709746] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:01:26.674 [2024-12-09 11:12:27.709772] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:01:26.674 [2024-12-09 11:12:27.709791] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 01:01:26.674 [2024-12-09 11:12:27.709804] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:01:26.674 [2024-12-09 11:12:27.717133] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10090d0 was disconnected and freed. delete nvme_qpair. 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:01:27.606 11:12:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2520443 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2520443 ']' 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2520443 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:27.606 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520443 01:01:27.864 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:27.864 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:27.864 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520443' 01:01:27.864 killing process with pid 2520443 01:01:27.864 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2520443 01:01:27.864 11:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2520443 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:27.864 
11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:27.864 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:27.864 rmmod nvme_tcp 01:01:27.864 rmmod nvme_fabrics 01:01:28.122 rmmod nvme_keyring 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2520301 ']' 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2520301 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2520301 ']' 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2520301 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520301 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520301' 01:01:28.122 
killing process with pid 2520301 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2520301 01:01:28.122 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2520301 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:28.380 11:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:30.280 11:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:01:30.280 01:01:30.280 real 0m23.561s 01:01:30.280 user 0m28.623s 01:01:30.280 sys 0m7.324s 01:01:30.280 11:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:01:30.280 11:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:01:30.280 ************************************ 01:01:30.280 END TEST nvmf_discovery_remove_ifc 01:01:30.280 ************************************ 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:01:30.538 ************************************ 01:01:30.538 START TEST nvmf_identify_kernel_target 01:01:30.538 ************************************ 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:01:30.538 * Looking for test storage... 
01:01:30.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 01:01:30.538 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 01:01:30.797 11:12:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:30.797 11:12:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:30.797 --rc genhtml_branch_coverage=1 01:01:30.797 --rc genhtml_function_coverage=1 01:01:30.797 --rc genhtml_legend=1 01:01:30.797 --rc geninfo_all_blocks=1 01:01:30.797 --rc geninfo_unexecuted_blocks=1 01:01:30.797 01:01:30.797 ' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:30.797 --rc genhtml_branch_coverage=1 01:01:30.797 --rc genhtml_function_coverage=1 01:01:30.797 --rc genhtml_legend=1 01:01:30.797 --rc geninfo_all_blocks=1 01:01:30.797 --rc geninfo_unexecuted_blocks=1 01:01:30.797 01:01:30.797 ' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:30.797 --rc genhtml_branch_coverage=1 01:01:30.797 --rc genhtml_function_coverage=1 01:01:30.797 --rc genhtml_legend=1 01:01:30.797 --rc geninfo_all_blocks=1 01:01:30.797 --rc geninfo_unexecuted_blocks=1 01:01:30.797 01:01:30.797 ' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:30.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:30.797 --rc genhtml_branch_coverage=1 01:01:30.797 --rc genhtml_function_coverage=1 01:01:30.797 --rc genhtml_legend=1 01:01:30.797 --rc geninfo_all_blocks=1 01:01:30.797 --rc geninfo_unexecuted_blocks=1 01:01:30.797 01:01:30.797 ' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:30.797 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:30.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 01:01:30.798 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:01:37.363 11:12:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:01:37.363 Found 0000:af:00.0 (0x8086 - 0x159b) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:01:37.363 11:12:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:01:37.363 Found 0000:af:00.1 (0x8086 - 0x159b) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:37.363 11:12:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:01:37.363 Found net devices under 0000:af:00.0: cvl_0_0 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:01:37.363 Found net devices under 0000:af:00.1: cvl_0_1 
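The "Found net devices under 0000:af:00.x" lines above come from expanding `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A small sketch of that PCI-address-to-interface-name mapping; the `net_devs_for_pci` name is hypothetical, and `SYSFS_PCI` is parameterized only so the sketch can be exercised against a fake sysfs tree:

```shell
#!/usr/bin/env bash
# Map a PCI function (e.g. 0000:af:00.0) to the net interfaces it exposes,
# mirroring how gather_supported_nvmf_pci_devs expands
# /sys/bus/pci/devices/$pci/net/* in the trace above.
# SYSFS_PCI is overridable so this runs against a synthetic tree.
SYSFS_PCI=${SYSFS_PCI:-/sys/bus/pci/devices}

net_devs_for_pci() {
    local pci=$1 d
    for d in "$SYSFS_PCI/$pci/net/"*; do
        [ -e "$d" ] && basename "$d"   # strip the sysfs path, keep the ifname
    done
}
```

On the test node in this log, `net_devs_for_pci 0000:af:00.0` would print `cvl_0_0`.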
01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:01:37.363 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:01:37.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:37.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 01:01:37.363 01:01:37.363 --- 10.0.0.2 ping statistics --- 01:01:37.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:37.364 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:01:37.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:37.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 01:01:37.364 01:01:37.364 --- 10.0.0.1 ping statistics --- 01:01:37.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:37.364 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:01:37.364 
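The `nvmf_tcp_init` trace above builds a two-port loopback topology: the target NIC (`cvl_0_0`, 10.0.0.2/24) is moved into a network namespace while the initiator NIC (`cvl_0_1`, 10.0.0.1/24) stays in the root namespace, and a ping in each direction verifies the path. A condensed sketch of those steps; the commands are collected into an array rather than executed, since applying them needs root, and the `plan` helper is hypothetical:

```shell
#!/usr/bin/env bash
# Condensed sketch of the topology nvmftestinit sets up in the trace above.
# Interface names, namespace name, and addresses are taken from this log.
TARGET_IF=${TARGET_IF:-cvl_0_0}
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}

cmds=()
plan() { cmds+=("$*"); }               # record instead of execute (needs root)

plan ip -4 addr flush "$TARGET_IF"
plan ip -4 addr flush "$INITIATOR_IF"
plan ip netns add "$NS"
plan ip link set "$TARGET_IF" netns "$NS"
plan ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
plan ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
plan ip link set "$INITIATOR_IF" up
plan ip netns exec "$NS" ip link set "$TARGET_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
plan ping -c 1 10.0.0.2                # initiator -> target, across the netns

printf '%s\n' "${cmds[@]}"
```

As root, the plan can be applied with `for c in "${cmds[@]}"; do $c; done` (safe here because no argument contains whitespace).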
11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:01:37.364 11:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:01:40.644 Waiting for block devices as requested 01:01:40.644 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 01:01:40.903 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:40.903 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:40.903 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:41.161 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:41.161 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:41.161 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:41.420 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:41.420 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:41.678 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:41.678 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:41.678 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:41.937 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:41.937 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:41.937 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
01:01:42.196 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:42.196 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:01:42.196 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 01:01:42.455 No valid GPT data, bailing 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 10.0.0.1 -t tcp -s 4420 01:01:42.455 01:01:42.455 Discovery Log Number of Records 2, Generation counter 2 01:01:42.455 =====Discovery Log Entry 0====== 01:01:42.455 trtype: tcp 01:01:42.455 adrfam: ipv4 01:01:42.455 subtype: current discovery subsystem 
01:01:42.455 treq: not specified, sq flow control disable supported 01:01:42.455 portid: 1 01:01:42.455 trsvcid: 4420 01:01:42.455 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:01:42.455 traddr: 10.0.0.1 01:01:42.455 eflags: none 01:01:42.455 sectype: none 01:01:42.455 =====Discovery Log Entry 1====== 01:01:42.455 trtype: tcp 01:01:42.455 adrfam: ipv4 01:01:42.455 subtype: nvme subsystem 01:01:42.455 treq: not specified, sq flow control disable supported 01:01:42.455 portid: 1 01:01:42.455 trsvcid: 4420 01:01:42.455 subnqn: nqn.2016-06.io.spdk:testnqn 01:01:42.455 traddr: 10.0.0.1 01:01:42.455 eflags: none 01:01:42.455 sectype: none 01:01:42.455 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:01:42.455 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:01:42.714 ===================================================== 01:01:42.714 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:01:42.714 ===================================================== 01:01:42.714 Controller Capabilities/Features 01:01:42.714 ================================ 01:01:42.714 Vendor ID: 0000 01:01:42.714 Subsystem Vendor ID: 0000 01:01:42.714 Serial Number: e4a568dd86e2f3012481 01:01:42.715 Model Number: Linux 01:01:42.715 Firmware Version: 6.8.9-20 01:01:42.715 Recommended Arb Burst: 0 01:01:42.715 IEEE OUI Identifier: 00 00 00 01:01:42.715 Multi-path I/O 01:01:42.715 May have multiple subsystem ports: No 01:01:42.715 May have multiple controllers: No 01:01:42.715 Associated with SR-IOV VF: No 01:01:42.715 Max Data Transfer Size: Unlimited 01:01:42.715 Max Number of Namespaces: 0 01:01:42.715 Max Number of I/O Queues: 1024 01:01:42.715 NVMe Specification Version (VS): 1.3 01:01:42.715 NVMe Specification Version (Identify): 1.3 01:01:42.715 Maximum Queue Entries: 1024 
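The `configure_kernel_target` sequence traced above (the `mkdir`/`echo`/`ln -s` calls under `/sys/kernel/config/nvmet`) is the standard kernel nvmet configfs flow for exporting a block device over NVMe-oF/TCP. A sketch of those steps follows; the attribute file names come from the kernel's nvmet configfs ABI (the log shows only the values being echoed, so the exact value-to-file mapping is inferred), and `NVMET` defaults to a scratch directory purely so the sketch is runnable without root — point it at `/sys/kernel/config/nvmet` on a real target host with the `nvmet` and `nvmet-tcp` modules loaded:

```shell
#!/usr/bin/env bash
# Sketch of the configfs sequence configure_kernel_target performs above:
# export a block device as kernel NVMe-oF/TCP subsystem
# nqn.2016-06.io.spdk:testnqn, listening on 10.0.0.1:4420.
# Attribute file names follow the kernel nvmet configfs ABI (inferred; the
# log shows only the echoed values). NVMET defaults to a temp dir so the
# sketch runs unprivileged; use /sys/kernel/config/nvmet on a real target.
set -e
NVMET=${NVMET:-$(mktemp -d)}
NQN=nqn.2016-06.io.spdk:testnqn
DEVICE=${DEVICE:-/dev/nvme0n1}
SUBSYS=$NVMET/subsystems/$NQN
PORT=$NVMET/ports/1

mkdir -p "$SUBSYS/namespaces/1" "$PORT/subsystems"
echo "SPDK-$NQN" > "$SUBSYS/attr_model"              # model string seen in identify output
echo 1           > "$SUBSYS/attr_allow_any_host"
echo "$DEVICE"   > "$SUBSYS/namespaces/1/device_path"
echo 1           > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1    > "$PORT/addr_traddr"
echo tcp         > "$PORT/addr_trtype"
echo 4420        > "$PORT/addr_trsvcid"
echo ipv4        > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/$NQN"              # linking subsystem to port activates it
```

Once the symlink is in place, the target answers discovery, which is exactly what the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` output above shows (one discovery subsystem entry plus the `testnqn` NVM subsystem entry).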
01:01:42.715 Contiguous Queues Required: No 01:01:42.715 Arbitration Mechanisms Supported 01:01:42.715 Weighted Round Robin: Not Supported 01:01:42.715 Vendor Specific: Not Supported 01:01:42.715 Reset Timeout: 7500 ms 01:01:42.715 Doorbell Stride: 4 bytes 01:01:42.715 NVM Subsystem Reset: Not Supported 01:01:42.715 Command Sets Supported 01:01:42.715 NVM Command Set: Supported 01:01:42.715 Boot Partition: Not Supported 01:01:42.715 Memory Page Size Minimum: 4096 bytes 01:01:42.715 Memory Page Size Maximum: 4096 bytes 01:01:42.715 Persistent Memory Region: Not Supported 01:01:42.715 Optional Asynchronous Events Supported 01:01:42.715 Namespace Attribute Notices: Not Supported 01:01:42.715 Firmware Activation Notices: Not Supported 01:01:42.715 ANA Change Notices: Not Supported 01:01:42.715 PLE Aggregate Log Change Notices: Not Supported 01:01:42.715 LBA Status Info Alert Notices: Not Supported 01:01:42.715 EGE Aggregate Log Change Notices: Not Supported 01:01:42.715 Normal NVM Subsystem Shutdown event: Not Supported 01:01:42.715 Zone Descriptor Change Notices: Not Supported 01:01:42.715 Discovery Log Change Notices: Supported 01:01:42.715 Controller Attributes 01:01:42.715 128-bit Host Identifier: Not Supported 01:01:42.715 Non-Operational Permissive Mode: Not Supported 01:01:42.715 NVM Sets: Not Supported 01:01:42.715 Read Recovery Levels: Not Supported 01:01:42.715 Endurance Groups: Not Supported 01:01:42.715 Predictable Latency Mode: Not Supported 01:01:42.715 Traffic Based Keep ALive: Not Supported 01:01:42.715 Namespace Granularity: Not Supported 01:01:42.715 SQ Associations: Not Supported 01:01:42.715 UUID List: Not Supported 01:01:42.715 Multi-Domain Subsystem: Not Supported 01:01:42.715 Fixed Capacity Management: Not Supported 01:01:42.715 Variable Capacity Management: Not Supported 01:01:42.715 Delete Endurance Group: Not Supported 01:01:42.715 Delete NVM Set: Not Supported 01:01:42.715 Extended LBA Formats Supported: Not Supported 01:01:42.715 Flexible 
Data Placement Supported: Not Supported 01:01:42.715 01:01:42.715 Controller Memory Buffer Support 01:01:42.715 ================================ 01:01:42.715 Supported: No 01:01:42.715 01:01:42.715 Persistent Memory Region Support 01:01:42.715 ================================ 01:01:42.715 Supported: No 01:01:42.715 01:01:42.715 Admin Command Set Attributes 01:01:42.715 ============================ 01:01:42.715 Security Send/Receive: Not Supported 01:01:42.715 Format NVM: Not Supported 01:01:42.715 Firmware Activate/Download: Not Supported 01:01:42.715 Namespace Management: Not Supported 01:01:42.715 Device Self-Test: Not Supported 01:01:42.715 Directives: Not Supported 01:01:42.715 NVMe-MI: Not Supported 01:01:42.715 Virtualization Management: Not Supported 01:01:42.715 Doorbell Buffer Config: Not Supported 01:01:42.715 Get LBA Status Capability: Not Supported 01:01:42.715 Command & Feature Lockdown Capability: Not Supported 01:01:42.715 Abort Command Limit: 1 01:01:42.715 Async Event Request Limit: 1 01:01:42.715 Number of Firmware Slots: N/A 01:01:42.715 Firmware Slot 1 Read-Only: N/A 01:01:42.715 Firmware Activation Without Reset: N/A 01:01:42.715 Multiple Update Detection Support: N/A 01:01:42.715 Firmware Update Granularity: No Information Provided 01:01:42.715 Per-Namespace SMART Log: No 01:01:42.715 Asymmetric Namespace Access Log Page: Not Supported 01:01:42.715 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:01:42.715 Command Effects Log Page: Not Supported 01:01:42.715 Get Log Page Extended Data: Supported 01:01:42.715 Telemetry Log Pages: Not Supported 01:01:42.715 Persistent Event Log Pages: Not Supported 01:01:42.715 Supported Log Pages Log Page: May Support 01:01:42.715 Commands Supported & Effects Log Page: Not Supported 01:01:42.715 Feature Identifiers & Effects Log Page:May Support 01:01:42.715 NVMe-MI Commands & Effects Log Page: May Support 01:01:42.715 Data Area 4 for Telemetry Log: Not Supported 01:01:42.715 Error Log Page Entries 
Supported: 1 01:01:42.715 Keep Alive: Not Supported 01:01:42.715 01:01:42.715 NVM Command Set Attributes 01:01:42.715 ========================== 01:01:42.715 Submission Queue Entry Size 01:01:42.715 Max: 1 01:01:42.715 Min: 1 01:01:42.715 Completion Queue Entry Size 01:01:42.715 Max: 1 01:01:42.715 Min: 1 01:01:42.715 Number of Namespaces: 0 01:01:42.715 Compare Command: Not Supported 01:01:42.715 Write Uncorrectable Command: Not Supported 01:01:42.715 Dataset Management Command: Not Supported 01:01:42.715 Write Zeroes Command: Not Supported 01:01:42.715 Set Features Save Field: Not Supported 01:01:42.715 Reservations: Not Supported 01:01:42.715 Timestamp: Not Supported 01:01:42.715 Copy: Not Supported 01:01:42.715 Volatile Write Cache: Not Present 01:01:42.715 Atomic Write Unit (Normal): 1 01:01:42.715 Atomic Write Unit (PFail): 1 01:01:42.715 Atomic Compare & Write Unit: 1 01:01:42.715 Fused Compare & Write: Not Supported 01:01:42.715 Scatter-Gather List 01:01:42.715 SGL Command Set: Supported 01:01:42.715 SGL Keyed: Not Supported 01:01:42.715 SGL Bit Bucket Descriptor: Not Supported 01:01:42.715 SGL Metadata Pointer: Not Supported 01:01:42.715 Oversized SGL: Not Supported 01:01:42.715 SGL Metadata Address: Not Supported 01:01:42.715 SGL Offset: Supported 01:01:42.715 Transport SGL Data Block: Not Supported 01:01:42.715 Replay Protected Memory Block: Not Supported 01:01:42.715 01:01:42.715 Firmware Slot Information 01:01:42.715 ========================= 01:01:42.715 Active slot: 0 01:01:42.715 01:01:42.715 01:01:42.715 Error Log 01:01:42.715 ========= 01:01:42.715 01:01:42.715 Active Namespaces 01:01:42.715 ================= 01:01:42.715 Discovery Log Page 01:01:42.715 ================== 01:01:42.715 Generation Counter: 2 01:01:42.715 Number of Records: 2 01:01:42.715 Record Format: 0 01:01:42.715 01:01:42.715 Discovery Log Entry 0 01:01:42.715 ---------------------- 01:01:42.715 Transport Type: 3 (TCP) 01:01:42.715 Address Family: 1 (IPv4) 01:01:42.715 Subsystem 
Type: 3 (Current Discovery Subsystem)
01:01:42.715 Entry Flags:
01:01:42.715 Duplicate Returned Information: 0
01:01:42.715 Explicit Persistent Connection Support for Discovery: 0
01:01:42.715 Transport Requirements:
01:01:42.715 Secure Channel: Not Specified
01:01:42.715 Port ID: 1 (0x0001)
01:01:42.715 Controller ID: 65535 (0xffff)
01:01:42.715 Admin Max SQ Size: 32
01:01:42.715 Transport Service Identifier: 4420
01:01:42.715 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
01:01:42.715 Transport Address: 10.0.0.1
01:01:42.715 Discovery Log Entry 1
01:01:42.715 ----------------------
01:01:42.715 Transport Type: 3 (TCP)
01:01:42.715 Address Family: 1 (IPv4)
01:01:42.715 Subsystem Type: 2 (NVM Subsystem)
01:01:42.715 Entry Flags:
01:01:42.715 Duplicate Returned Information: 0
01:01:42.715 Explicit Persistent Connection Support for Discovery: 0
01:01:42.715 Transport Requirements:
01:01:42.715 Secure Channel: Not Specified
01:01:42.715 Port ID: 1 (0x0001)
01:01:42.715 Controller ID: 65535 (0xffff)
01:01:42.715 Admin Max SQ Size: 32
01:01:42.715 Transport Service Identifier: 4420
01:01:42.715 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
01:01:42.715 Transport Address: 10.0.0.1
01:01:42.715 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
01:01:42.715 get_feature(0x01) failed
01:01:42.715 get_feature(0x02) failed
01:01:42.715 get_feature(0x04) failed
01:01:42.715 =====================================================
01:01:42.715 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
01:01:42.715 =====================================================
01:01:42.715 Controller Capabilities/Features
01:01:42.715 ================================
01:01:42.715 Vendor ID: 0000
01:01:42.715 Subsystem Vendor ID: 0000
01:01:42.715 Serial Number: 6af9d16efdfea8b21b2f
01:01:42.715 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
01:01:42.716 Firmware Version: 6.8.9-20
01:01:42.716 Recommended Arb Burst: 6
01:01:42.716 IEEE OUI Identifier: 00 00 00
01:01:42.716 Multi-path I/O
01:01:42.716 May have multiple subsystem ports: Yes
01:01:42.716 May have multiple controllers: Yes
01:01:42.716 Associated with SR-IOV VF: No
01:01:42.716 Max Data Transfer Size: Unlimited
01:01:42.716 Max Number of Namespaces: 1024
01:01:42.716 Max Number of I/O Queues: 128
01:01:42.716 NVMe Specification Version (VS): 1.3
01:01:42.716 NVMe Specification Version (Identify): 1.3
01:01:42.716 Maximum Queue Entries: 1024
01:01:42.716 Contiguous Queues Required: No
01:01:42.716 Arbitration Mechanisms Supported
01:01:42.716 Weighted Round Robin: Not Supported
01:01:42.716 Vendor Specific: Not Supported
01:01:42.716 Reset Timeout: 7500 ms
01:01:42.716 Doorbell Stride: 4 bytes
01:01:42.716 NVM Subsystem Reset: Not Supported
01:01:42.716 Command Sets Supported
01:01:42.716 NVM Command Set: Supported
01:01:42.716 Boot Partition: Not Supported
01:01:42.716 Memory Page Size Minimum: 4096 bytes
01:01:42.716 Memory Page Size Maximum: 4096 bytes
01:01:42.716 Persistent Memory Region: Not Supported
01:01:42.716 Optional Asynchronous Events Supported
01:01:42.716 Namespace Attribute Notices: Supported
01:01:42.716 Firmware Activation Notices: Not Supported
01:01:42.716 ANA Change Notices: Supported
01:01:42.716 PLE Aggregate Log Change Notices: Not Supported
01:01:42.716 LBA Status Info Alert Notices: Not Supported
01:01:42.716 EGE Aggregate Log Change Notices: Not Supported
01:01:42.716 Normal NVM Subsystem Shutdown event: Not Supported
01:01:42.716 Zone Descriptor Change Notices: Not Supported
01:01:42.716 Discovery Log Change Notices: Not Supported
01:01:42.716 Controller Attributes
01:01:42.716 128-bit Host Identifier: Supported
01:01:42.716 Non-Operational Permissive Mode: Not Supported
01:01:42.716 NVM Sets: Not Supported
01:01:42.716 Read Recovery Levels: Not Supported
01:01:42.716 Endurance Groups: Not Supported
01:01:42.716 Predictable Latency Mode: Not Supported
01:01:42.716 Traffic Based Keep ALive: Supported
01:01:42.716 Namespace Granularity: Not Supported
01:01:42.716 SQ Associations: Not Supported
01:01:42.716 UUID List: Not Supported
01:01:42.716 Multi-Domain Subsystem: Not Supported
01:01:42.716 Fixed Capacity Management: Not Supported
01:01:42.716 Variable Capacity Management: Not Supported
01:01:42.716 Delete Endurance Group: Not Supported
01:01:42.716 Delete NVM Set: Not Supported
01:01:42.716 Extended LBA Formats Supported: Not Supported
01:01:42.716 Flexible Data Placement Supported: Not Supported
01:01:42.716
01:01:42.716 Controller Memory Buffer Support
01:01:42.716 ================================
01:01:42.716 Supported: No
01:01:42.716
01:01:42.716 Persistent Memory Region Support
01:01:42.716 ================================
01:01:42.716 Supported: No
01:01:42.716
01:01:42.716 Admin Command Set Attributes
01:01:42.716 ============================
01:01:42.716 Security Send/Receive: Not Supported
01:01:42.716 Format NVM: Not Supported
01:01:42.716 Firmware Activate/Download: Not Supported
01:01:42.716 Namespace Management: Not Supported
01:01:42.716 Device Self-Test: Not Supported
01:01:42.716 Directives: Not Supported
01:01:42.716 NVMe-MI: Not Supported
01:01:42.716 Virtualization Management: Not Supported
01:01:42.716 Doorbell Buffer Config: Not Supported
01:01:42.716 Get LBA Status Capability: Not Supported
01:01:42.716 Command & Feature Lockdown Capability: Not Supported
01:01:42.716 Abort Command Limit: 4
01:01:42.716 Async Event Request Limit: 4
01:01:42.716 Number of Firmware Slots: N/A
01:01:42.716 Firmware Slot 1 Read-Only: N/A
01:01:42.716 Firmware Activation Without Reset: N/A
01:01:42.716 Multiple Update Detection Support: N/A
01:01:42.716 Firmware Update Granularity: No Information Provided
01:01:42.716 Per-Namespace SMART Log: Yes
01:01:42.716 Asymmetric Namespace Access Log Page: Supported
01:01:42.716 ANA Transition Time                 : 10 sec
01:01:42.716
01:01:42.716 Asymmetric Namespace Access Capabilities
01:01:42.716 ANA Optimized State               : Supported
01:01:42.716 ANA Non-Optimized State           : Supported
01:01:42.716 ANA Inaccessible State            : Supported
01:01:42.716 ANA Persistent Loss State         : Supported
01:01:42.716 ANA Change State                  : Supported
01:01:42.716 ANAGRPID is not changed           : No
01:01:42.716 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
01:01:42.716
01:01:42.716 ANA Group Identifier Maximum : 128
01:01:42.716 Number of ANA Group Identifiers : 128
01:01:42.716 Max Number of Allowed Namespaces : 1024
01:01:42.716 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
01:01:42.716 Command Effects Log Page: Supported
01:01:42.716 Get Log Page Extended Data: Supported
01:01:42.716 Telemetry Log Pages: Not Supported
01:01:42.716 Persistent Event Log Pages: Not Supported
01:01:42.716 Supported Log Pages Log Page: May Support
01:01:42.716 Commands Supported & Effects Log Page: Not Supported
01:01:42.716 Feature Identifiers & Effects Log Page:May Support
01:01:42.716 NVMe-MI Commands & Effects Log Page: May Support
01:01:42.716 Data Area 4 for Telemetry Log: Not Supported
01:01:42.716 Error Log Page Entries Supported: 128
01:01:42.716 Keep Alive: Supported
01:01:42.716 Keep Alive Granularity: 1000 ms
01:01:42.716
01:01:42.716 NVM Command Set Attributes
01:01:42.716 ==========================
01:01:42.716 Submission Queue Entry Size
01:01:42.716 Max: 64
01:01:42.716 Min: 64
01:01:42.716 Completion Queue Entry Size
01:01:42.716 Max: 16
01:01:42.716 Min: 16
01:01:42.716 Number of Namespaces: 1024
01:01:42.716 Compare Command: Not Supported
01:01:42.716 Write Uncorrectable Command: Not Supported
01:01:42.716 Dataset Management Command: Supported
01:01:42.716 Write Zeroes Command: Supported
01:01:42.716 Set Features Save Field: Not Supported
01:01:42.716 Reservations: Not Supported
01:01:42.716 Timestamp: Not Supported
01:01:42.716 Copy: Not Supported
01:01:42.716 Volatile Write Cache: Present
01:01:42.716 Atomic Write Unit (Normal): 1
01:01:42.716 Atomic Write Unit (PFail): 1
01:01:42.716 Atomic Compare & Write Unit: 1
01:01:42.716 Fused Compare & Write: Not Supported
01:01:42.716 Scatter-Gather List
01:01:42.716 SGL Command Set: Supported
01:01:42.716 SGL Keyed: Not Supported
01:01:42.716 SGL Bit Bucket Descriptor: Not Supported
01:01:42.716 SGL Metadata Pointer: Not Supported
01:01:42.716 Oversized SGL: Not Supported
01:01:42.716 SGL Metadata Address: Not Supported
01:01:42.716 SGL Offset: Supported
01:01:42.716 Transport SGL Data Block: Not Supported
01:01:42.716 Replay Protected Memory Block: Not Supported
01:01:42.716
01:01:42.716 Firmware Slot Information
01:01:42.716 =========================
01:01:42.716 Active slot: 0
01:01:42.716
01:01:42.716 Asymmetric Namespace Access
01:01:42.716 ===========================
01:01:42.716 Change Count : 0
01:01:42.716 Number of ANA Group Descriptors : 1
01:01:42.716 ANA Group Descriptor : 0
01:01:42.716 ANA Group ID : 1
01:01:42.716 Number of NSID Values : 1
01:01:42.716 Change Count : 0
01:01:42.716 ANA State : 1
01:01:42.716 Namespace Identifier : 1
01:01:42.716
01:01:42.716 Commands Supported and Effects
01:01:42.716 ==============================
01:01:42.716 Admin Commands
01:01:42.716 --------------
01:01:42.716 Get Log Page (02h): Supported
01:01:42.716 Identify (06h): Supported
01:01:42.716 Abort (08h): Supported
01:01:42.716 Set Features (09h): Supported
01:01:42.716 Get Features (0Ah): Supported
01:01:42.716 Asynchronous Event Request (0Ch): Supported
01:01:42.716 Keep Alive (18h): Supported
01:01:42.716 I/O Commands
01:01:42.716 ------------
01:01:42.716 Flush (00h): Supported
01:01:42.716 Write (01h): Supported LBA-Change
01:01:42.716 Read (02h): Supported
01:01:42.716 Write Zeroes (08h): Supported LBA-Change
01:01:42.716 Dataset Management (09h): Supported
01:01:42.716
01:01:42.716 Error Log
01:01:42.716 =========
01:01:42.716 Entry: 0
01:01:42.716 Error Count: 0x3
01:01:42.716 Submission Queue Id: 0x0
01:01:42.716 Command Id: 0x5
01:01:42.716 Phase Bit: 0
01:01:42.716 Status Code: 0x2
01:01:42.716 Status Code Type: 0x0
01:01:42.716 Do Not Retry: 1
01:01:42.975 Error Location: 0x28
01:01:42.975 LBA: 0x0
01:01:42.975 Namespace: 0x0
01:01:42.975 Vendor Log Page: 0x0
01:01:42.975 -----------
01:01:42.975 Entry: 1
01:01:42.975 Error Count: 0x2
01:01:42.975 Submission Queue Id: 0x0
01:01:42.975 Command Id: 0x5
01:01:42.975 Phase Bit: 0
01:01:42.975 Status Code: 0x2
01:01:42.975 Status Code Type: 0x0
01:01:42.975 Do Not Retry: 1
01:01:42.975 Error Location: 0x28
01:01:42.975 LBA: 0x0
01:01:42.975 Namespace: 0x0
01:01:42.975 Vendor Log Page: 0x0
01:01:42.975 -----------
01:01:42.975 Entry: 2
01:01:42.975 Error Count: 0x1
01:01:42.975 Submission Queue Id: 0x0
01:01:42.975 Command Id: 0x4
01:01:42.975 Phase Bit: 0
01:01:42.975 Status Code: 0x2
01:01:42.975 Status Code Type: 0x0
01:01:42.975 Do Not Retry: 1
01:01:42.975 Error Location: 0x28
01:01:42.975 LBA: 0x0
01:01:42.976 Namespace: 0x0
01:01:42.976 Vendor Log Page: 0x0
01:01:42.976
01:01:42.976 Number of Queues
01:01:42.976 ================
01:01:42.976 Number of I/O Submission Queues: 128
01:01:42.976 Number of I/O Completion Queues: 128
01:01:42.976
01:01:42.976 ZNS Specific Controller Data
01:01:42.976 ============================
01:01:42.976 Zone Append Size Limit: 0
01:01:42.976
01:01:42.976
01:01:42.976 Active Namespaces
01:01:42.976 =================
01:01:42.976 get_feature(0x05) failed
01:01:42.976 Namespace ID:1
01:01:42.976 Command Set Identifier: NVM (00h)
01:01:42.976 Deallocate: Supported
01:01:42.976 Deallocated/Unwritten Error: Not Supported
01:01:42.976 Deallocated Read Value: Unknown
01:01:42.976 Deallocate in Write Zeroes: Not Supported
01:01:42.976 Deallocated Guard Field: 0xFFFF
01:01:42.976 Flush: Supported
01:01:42.976 Reservation: Not Supported
01:01:42.976 Namespace Sharing Capabilities: Multiple Controllers
01:01:42.976 Size (in LBAs): 7814037168 (3726GiB)
01:01:42.976 Capacity (in LBAs): 7814037168 (3726GiB)
01:01:42.976 Utilization (in LBAs): 7814037168 (3726GiB)
01:01:42.976 UUID: f8e25350-6b1a-4298-9c13-0c9305e400b5
01:01:42.976 Thin Provisioning: Not Supported
01:01:42.976 Per-NS Atomic Units: Yes
01:01:42.976 Atomic Boundary Size (Normal): 0
01:01:42.976 Atomic Boundary Size (PFail): 0
01:01:42.976 Atomic Boundary Offset: 0
01:01:42.976 NGUID/EUI64 Never Reused: No
01:01:42.976 ANA group ID: 1
01:01:42.976 Namespace Write Protected: No
01:01:42.976 Number of LBA Formats: 1
01:01:42.976 Current LBA Format: LBA Format #00
01:01:42.976 LBA Format #00: Data Size: 512 Metadata Size: 0
01:01:42.976
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:01:42.976 rmmod nvme_tcp
01:01:42.976 rmmod nvme_fabrics
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:01:42.976 11:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:01:44.879 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
01:01:44.879 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
01:01:44.879 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
01:01:44.879 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
01:01:44.879 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
01:01:45.138 11:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
01:01:48.423 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
01:01:48.423 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
01:01:51.709 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
01:01:51.709
01:01:51.709 real	0m21.219s
01:01:51.709 user	0m5.560s
01:01:51.709 sys	0m9.850s
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
01:01:51.709 ************************************
01:01:51.709 END TEST nvmf_identify_kernel_target
01:01:51.709 ************************************
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
01:01:51.709 ************************************
01:01:51.709 START TEST nvmf_auth_host
01:01:51.709 ************************************
01:01:51.709 11:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
01:01:51.969 * Looking for test storage...
01:01:51.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
01:01:51.969 11:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
01:01:51.969 11:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
01:01:51.969 11:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
01:01:51.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:01:51.969 --rc genhtml_branch_coverage=1
01:01:51.969 --rc genhtml_function_coverage=1
01:01:51.969 --rc genhtml_legend=1
01:01:51.969 --rc geninfo_all_blocks=1
01:01:51.969 --rc geninfo_unexecuted_blocks=1
01:01:51.969
01:01:51.969 '
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
01:01:51.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:01:51.969 --rc genhtml_branch_coverage=1
01:01:51.969 --rc genhtml_function_coverage=1
01:01:51.969 --rc genhtml_legend=1
01:01:51.969 --rc geninfo_all_blocks=1
01:01:51.969 --rc geninfo_unexecuted_blocks=1
01:01:51.969
01:01:51.969 '
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
01:01:51.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:01:51.969 --rc genhtml_branch_coverage=1
01:01:51.969 --rc genhtml_function_coverage=1
01:01:51.969 --rc genhtml_legend=1
01:01:51.969 --rc geninfo_all_blocks=1
01:01:51.969 --rc geninfo_unexecuted_blocks=1
01:01:51.969
01:01:51.969 '
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov
01:01:51.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:01:51.969 --rc genhtml_branch_coverage=1
01:01:51.969 --rc genhtml_function_coverage=1
01:01:51.969 --rc genhtml_legend=1
01:01:51.969 --rc geninfo_all_blocks=1
01:01:51.969 --rc geninfo_unexecuted_blocks=1
01:01:51.969
01:01:51.969 '
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
01:01:51.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
01:01:51.969 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
01:01:51.970 11:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
01:01:58.532 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
01:01:58.533 Found 0000:af:00.0 (0x8086 - 0x159b)
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
01:01:58.533 Found 0000:af:00.1 (0x8086 - 0x159b)
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:01:58.533 Found net devices under 0000:af:00.0: cvl_0_0 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:01:58.533 Found net devices under 0000:af:00.1: cvl_0_1 01:01:58.533 11:12:59 
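The discovery loop above matches NICs by PCI vendor:device ID and then resolves each PCI address to its kernel net device through the sysfs glob `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` in nvmf/common.sh. A minimal standalone sketch of that lookup (hypothetical helper; the `sysfs_root` parameter is an assumption added only so the function can be exercised off-target):

```python
import os

def pci_net_devs(bdf: str, sysfs_root: str = "/sys") -> list[str]:
    # Equivalent of the shell glob /sys/bus/pci/devices/$pci/net/* used by the
    # test script; an empty result corresponds to the "unbound" branch there.
    net_dir = os.path.join(sysfs_root, "bus", "pci", "devices", bdf, "net")
    if not os.path.isdir(net_dir):
        return []
    return sorted(os.listdir(net_dir))
```

On the node in this run, the same lookup for `0000:af:00.0` and `0000:af:00.1` would be expected to yield `cvl_0_0` and `cvl_0_1`, matching the "Found net devices under ..." lines in the trace.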
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:01:58.533 11:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:01:58.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:01:58.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 01:01:58.533 01:01:58.533 --- 10.0.0.2 ping statistics --- 01:01:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:58.533 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:01:58.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:01:58.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 01:01:58.533 01:01:58.533 --- 10.0.0.1 ping statistics --- 01:01:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:58.533 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2531268 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2531268 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2531268 ']' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:58.533 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.101 11:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.101 11:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cd40f1ff76dce838baae9084fc041ec2 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.khB 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cd40f1ff76dce838baae9084fc041ec2 0 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cd40f1ff76dce838baae9084fc041ec2 0 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cd40f1ff76dce838baae9084fc041ec2 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.khB 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.khB 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.khB 
01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:01:59.101 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21a84df39168f0f3d36c9f1dcb09b5cf1c868f45c8e1916e15fa05b32caacbfc 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GaY 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21a84df39168f0f3d36c9f1dcb09b5cf1c868f45c8e1916e15fa05b32caacbfc 3 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21a84df39168f0f3d36c9f1dcb09b5cf1c868f45c8e1916e15fa05b32caacbfc 3 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21a84df39168f0f3d36c9f1dcb09b5cf1c868f45c8e1916e15fa05b32caacbfc 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GaY 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GaY 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GaY 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4e6dadbeae6d7e7158e1cdb7a43aed1dbb22680cf40ad0c9 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nK8 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4e6dadbeae6d7e7158e1cdb7a43aed1dbb22680cf40ad0c9 0 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4e6dadbeae6d7e7158e1cdb7a43aed1dbb22680cf40ad0c9 0 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4e6dadbeae6d7e7158e1cdb7a43aed1dbb22680cf40ad0c9 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nK8 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nK8 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nK8 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:01:59.102 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68885cecead941bd2970a3fe05477ec8835e02b6aa763853 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.opo 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68885cecead941bd2970a3fe05477ec8835e02b6aa763853 2 01:01:59.361 11:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68885cecead941bd2970a3fe05477ec8835e02b6aa763853 2 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68885cecead941bd2970a3fe05477ec8835e02b6aa763853 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.opo 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.opo 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.opo 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=315d8dc3425d13a5b87b26cae7888c8f 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Mre 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 315d8dc3425d13a5b87b26cae7888c8f 1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 315d8dc3425d13a5b87b26cae7888c8f 1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=315d8dc3425d13a5b87b26cae7888c8f 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Mre 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Mre 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Mre 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=783b04d573636f0afd092dcaa5171857 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ifa 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 783b04d573636f0afd092dcaa5171857 1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 783b04d573636f0afd092dcaa5171857 1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=783b04d573636f0afd092dcaa5171857 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ifa 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ifa 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ifa 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.361 11:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4623b1f1b9335531f5449f2816bb047c450031b9486052b5 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.huw 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4623b1f1b9335531f5449f2816bb047c450031b9486052b5 2 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4623b1f1b9335531f5449f2816bb047c450031b9486052b5 2 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4623b1f1b9335531f5449f2816bb047c450031b9486052b5 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:01:59.361 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.huw 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.huw 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.huw 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1358ddc953edaf06412a3c3d99f810a 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.s9E 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1358ddc953edaf06412a3c3d99f810a 0 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1358ddc953edaf06412a3c3d99f810a 0 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1358ddc953edaf06412a3c3d99f810a 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.s9E 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.s9E 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.s9E 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fb374c562c26983ac3cba325175a3a8f8db14e6d4deaad0409e67db2eeb86773 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.E3W 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fb374c562c26983ac3cba325175a3a8f8db14e6d4deaad0409e67db2eeb86773 3 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fb374c562c26983ac3cba325175a3a8f8db14e6d4deaad0409e67db2eeb86773 3 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fb374c562c26983ac3cba325175a3a8f8db14e6d4deaad0409e67db2eeb86773 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:01:59.620 11:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.E3W 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.E3W 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.E3W 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2531268 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2531268 ']' 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:59.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
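Each `gen_dhchap_key <digest> <len>` call traced above reads `len/2` random bytes with `xxd -p -c0 -l <len/2> /dev/urandom`, then hands the hex string to an inline `python -` snippet (`format_dhchap_key` → `format_key DHHC-1 <key> <digest-id>`) that wraps it in the DHHC-1 secret format. A standalone sketch of that formatting step, assuming the payload is base64 of the raw key bytes followed by their little-endian CRC32, which is how nvme-cli defines DHHC-1 keys; the function names mirror the script but are reimplemented here:

```python
import base64
import os
import zlib

DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(hex_key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    # Assumption: payload = base64(key bytes || CRC32(key) little-endian),
    # wrapped as "<prefix>:<digest-id as 2 hex digits>:<payload>:".
    raw = bytes.fromhex(hex_key)
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return f"{prefix}:{digest_id:02x}:{base64.b64encode(raw + crc).decode()}:"

def gen_dhchap_key(digest: str, length: int) -> str:
    # length counts hex characters, so read length/2 random bytes
    # (matching `xxd -p -c0 -l <length/2> /dev/urandom` in the trace).
    hex_key = os.urandom(length // 2).hex()
    return format_dhchap_key(hex_key, DIGESTS[digest])
```

The generated secret is then written to a `mktemp -t spdk.key-<digest>.XXX` file, `chmod 0600`'d, and registered with the target via `rpc_cmd keyring_file_add_key keyN <file>`, as the subsequent trace lines show.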
01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:59.620 11:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.khB 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GaY ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GaY 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nK8 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.opo ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.opo 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Mre 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ifa ]] 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ifa 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:59.879 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.huw 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.s9E ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.s9E 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.E3W 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:00.138 11:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:02:00.138 11:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:02:03.419 Waiting for block devices as requested 01:02:03.419 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 01:02:03.419 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:02:03.419 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:02:03.677 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:02:03.677 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:02:03.677 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:02:03.935 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:02:03.935 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:02:03.935 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:02:04.192 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:02:04.192 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:02:04.192 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:02:04.450 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:02:04.450 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:02:04.450 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:02:04.450 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:02:04.708 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:02:05.274 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 01:02:05.532 No valid GPT data, bailing 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 10.0.0.1 -t tcp -s 4420 01:02:05.532 01:02:05.532 Discovery Log Number of Records 2, Generation counter 2 01:02:05.532 =====Discovery Log Entry 0====== 01:02:05.532 trtype: tcp 01:02:05.532 adrfam: ipv4 01:02:05.532 subtype: current discovery subsystem 01:02:05.532 treq: not specified, sq flow control disable supported 01:02:05.532 portid: 1 01:02:05.532 trsvcid: 4420 01:02:05.532 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:02:05.532 traddr: 10.0.0.1 01:02:05.532 eflags: none 01:02:05.532 sectype: none 01:02:05.532 =====Discovery Log Entry 1====== 01:02:05.532 trtype: tcp 01:02:05.532 adrfam: ipv4 01:02:05.532 subtype: nvme subsystem 01:02:05.532 treq: not specified, sq flow control disable supported 01:02:05.532 portid: 1 01:02:05.532 trsvcid: 4420 01:02:05.532 subnqn: nqn.2024-02.io.spdk:cnode0 01:02:05.532 traddr: 10.0.0.1 01:02:05.532 eflags: none 01:02:05.532 sectype: none 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:02:05.532 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:05.533 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:05.791 nvme0n1 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:05.791 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.049 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:06.049 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:06.049 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:02:06.049 11:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
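The `configure_kernel_target` sequence traced earlier (the `mkdir`/`echo`/`ln -s` calls from nvmf/common.sh around `@686`-`@705`) amounts to a short configfs recipe for exposing a local block device through the kernel nvmet TCP target. A hedged sketch follows; the NQN, address, and device match the log, but the attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are inferred from the standard nvmet configfs layout rather than printed verbatim in the trace. Requires root and the nvmet/nvmet-tcp modules.

```shell
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"        # later narrowed via allowed_hosts symlink
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Publishing the subsystem on the port makes it appear in the discovery log,
# matching the two-record `nvme discover` output above.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```

The subsequent `mkdir .../hosts/nqn.2024-02.io.spdk:host0` and `allowed_hosts` symlink in the log then restrict access to the test host NQN before the DH-HMAC-CHAP connect attempts.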
01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.049 nvme0n1 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.049 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.308 11:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:06.308 
11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.308 nvme0n1 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:06.308 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.566 nvme0n1
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]]
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz:
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.566 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.825 nvme0n1
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=:
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=:
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:06.825 11:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.083 nvme0n1
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.083 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu:
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=:
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:07.084 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu:
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]]
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=:
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.342 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.600 nvme0n1
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.600 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==:
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==:
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==:
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==:
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.859 nvme0n1
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.859 11:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:07.859 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:07.859 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:07.859 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:07.859 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:07.859 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF:
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO:
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF:
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]]
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO:
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:08.117 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.118 nvme0n1
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.118 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==:
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz:
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==:
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz:
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.376 nvme0n1
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
01:02:08.376 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.377 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.377 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=:
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:08.635 11:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:08.635 nvme0n1 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:08.635 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:08.894 11:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.460 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:09.718 nvme0n1 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:09.718 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:09.719 
11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:09.719 11:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.285 nvme0n1 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:10.285 11:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:02:10.285 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.286 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.544 nvme0n1 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.544 11:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:10.544 
11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:10.544 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:10.545 11:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.545 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.803 nvme0n1 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.803 11:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 01:02:10.803 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:10.804 
11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:10.804 11:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:11.370 nvme0n1 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:11.370 11:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:13.273 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:13.273 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:13.273 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:13.273 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:13.274 11:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.274 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:13.839 nvme0n1 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:13.839 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:13.840 11:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:13.840 11:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.098 nvme0n1 01:02:14.098 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.098 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:14.098 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:14.098 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.098 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.356 11:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.922 nvme0n1 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:14.922 11:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.922 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:14.923 11:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:14.923 11:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:14.923 11:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:15.181 nvme0n1 01:02:15.181 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:15.181 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:15.181 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:15.181 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:15.181 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:15.439 11:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:15.439 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:15.440 11:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:15.440 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.006 nvme0n1 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:16.006 11:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.006 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.007 11:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.572 nvme0n1 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.572 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.830 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:16.831 11:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:16.831 11:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:16.831 11:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:16.831 11:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:17.397 nvme0n1 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:17.397 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:17.655 11:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:17.655 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:17.656 11:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:18.223 nvme0n1 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:18.223 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:18.483 11:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:19.050 nvme0n1 01:02:19.050 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:19.309 
11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:19.309 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:19.310 11:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.245 nvme0n1 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:20.245 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.246 nvme0n1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:20.246 
11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.246 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.506 nvme0n1 
01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:20.506 11:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:20.506 
11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.506 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.780 nvme0n1 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:20.780 11:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:20.780 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:20.781 11:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.040 nvme0n1 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:21.040 11:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.040 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.300 nvme0n1 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.300 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.655 nvme0n1 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:21.655 
11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:21.655 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.656 nvme0n1 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.656 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 
01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:21.940 11:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.940 11:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.940 nvme0n1 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:21.940 11:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:21.940 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.221 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.222 nvme0n1 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:02:22.222 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.530 nvme0n1 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:02:22.530 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:22.531 11:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:22.531 11:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:22.531 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:22.531 11:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.133 nvme0n1 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.133 11:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.133 
11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.133 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.411 nvme0n1 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:23.411 11:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:23.411 11:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.411 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.707 nvme0n1 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:23.707 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:23.708 11:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:23.708 11:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.023 nvme0n1 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.023 11:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:24.023 11:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:24.023 
11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.023 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.284 nvme0n1 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.284 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:24.543 11:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.543 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.544 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.803 nvme0n1 
01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:24.803 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.062 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:25.062 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:25.062 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.062 11:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:25.062 11:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.062 
11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.062 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.631 nvme0n1 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.631 11:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.631 11:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.891 nvme0n1 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:25.891 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.151 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.410 nvme0n1 01:02:26.410 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.410 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:26.410 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:26.410 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.410 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.671 11:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:02:27.241 nvme0n1 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:27.241 11:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:27.241 11:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:27.241 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:27.810 nvme0n1 01:02:27.810 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:27.810 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:27.810 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:27.810 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:27.810 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:28.070 11:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:28.070 11:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:28.070 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:28.071 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.010 nvme0n1 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.010 
11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:29.010 11:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.010 11:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.579 nvme0n1 01:02:29.579 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.579 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:29.580 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.580 11:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:29.840 11:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:29.840 11:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:30.410 nvme0n1 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:30.410 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:02:30.669 11:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:30.669 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:30.670 11:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.239 nvme0n1 01:02:31.239 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.239 
11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:31.239 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:31.239 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.239 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.239 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.499 nvme0n1 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:31.499 11:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.499 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.760 nvme0n1 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:31.760 11:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:31.760 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.021 11:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.021 nvme0n1 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.021 11:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:32.021 11:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.021 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.281 nvme0n1 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.282 11:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.282 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.543 nvme0n1 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.543 11:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.543 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.803 nvme0n1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:32.803 11:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:32.803 11:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.063 nvme0n1 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:33.063 
11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:33.063 11:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.063 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.322 nvme0n1 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.322 11:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:33.322 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.323 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.582 nvme0n1 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.582 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:33.583 11:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.583 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.843 nvme0n1 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:33.843 
11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.843 11:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:33.843 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:33.843 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.114 nvme0n1 01:02:34.114 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.114 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:34.114 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.376 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:34.376 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:34.377 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:34.377 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.377 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.637 nvme0n1 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:34.637 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:34.637 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:02:34.638 11:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.638 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.897 nvme0n1 01:02:34.897 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.897 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:34.897 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:34.897 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.897 11:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:34.897 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:34.898 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:35.157 11:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.157 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.416 nvme0n1 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.416 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:35.417 
11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.417 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.676 nvme0n1 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:35.676 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:35.677 11:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:35.677 11:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.245 nvme0n1 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:36.245 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:36.246 11:13:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.246 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.815 nvme0n1 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:36.815 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:36.816 
11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:36.816 11:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.386 nvme0n1 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:37.386 11:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:37.386 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:37.387 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:37.387 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.387 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
01:02:37.956 nvme0n1 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:37.956 
11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:37.956 11:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:38.526 nvme0n1 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q0MGYxZmY3NmRjZTgzOGJhYWU5MDg0ZmMwNDFlYzLjTVHu: 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjFhODRkZjM5MTY4ZjBmM2QzNmM5ZjFkY2IwOWI1Y2YxYzg2OGY0NWM4ZTE5MTZlMTVmYTA1YjMyY2FhY2JmY9Ec1wE=: 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:38.526 11:13:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:38.526 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:02:38.527 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:38.527 11:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:39.465 nvme0n1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:39.465 11:13:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:39.465 11:13:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:39.465 11:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.034 nvme0n1 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.034 11:13:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:40.034 11:13:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.034 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.315 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.885 nvme0n1 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:40.885 11:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:40.885 11:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDYyM2IxZjFiOTMzNTUzMWY1NDQ5ZjI4MTZiYjA0N2M0NTAwMzFiOTQ4NjA1MmI10tlVoA==: 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: ]] 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDEzNThkZGM5NTNlZGFmMDY0MTJhM2MzZDk5ZjgxMGHMkXfz: 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:40.885 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:41.145 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
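Each cycle in the log above configures a target key (`nvmet_auth_set_key`), selects a digest/dhgroup pair (`bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...`), and attaches with `--dhchap-key keyN`/`--dhchap-ctrlr-key ckeyN`. The secrets echoed by the script (e.g. `DHHC-1:00:...`, `DHHC-1:03:...`) use the NVMe TP 8006 text representation: a hash indicator followed by base64 of the raw secret with a CRC-32 appended. The sketch below is a minimal, hypothetical round-trip encoder/decoder for that representation, not SPDK's implementation; the little-endian placement of the CRC-32 is an assumption labeled in the code.

```python
import base64
import binascii
import os
import struct

# Hash indicator in the DHHC-1 prefix, as seen in the log's keys:
# 00 = no transform; 01/02/03 = SHA-256/384/512 (32/48/64-byte secrets).
KEY_LEN = {0: 32, 1: 32, 2: 48, 3: 64}


def format_dhchap_key(secret: bytes, hash_id: int = 0) -> str:
    """Render a raw secret in the DHHC-1 text form used by host/auth.sh.

    Assumption: the 4-byte CRC-32 of the secret is appended
    little-endian before base64 encoding.
    """
    crc = struct.pack("<I", binascii.crc32(secret) & 0xFFFFFFFF)
    return "DHHC-1:%02x:%s:" % (hash_id, base64.b64encode(secret + crc).decode())


def parse_dhchap_key(text: str) -> bytes:
    """Inverse of format_dhchap_key; raises ValueError on a bad CRC."""
    prefix, _hash_id, blob, _trailer = text.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(blob)
    secret, crc = raw[:-4], raw[-4:]
    if struct.pack("<I", binascii.crc32(secret) & 0xFFFFFFFF) != crc:
        raise ValueError("CRC mismatch")
    return secret


# Round-trip a freshly generated 32-byte secret (hash indicator 00).
secret = os.urandom(KEY_LEN[0])
text = format_dhchap_key(secret, hash_id=0)
assert text.startswith("DHHC-1:00:") and text.endswith(":")
assert parse_dhchap_key(text) == secret
```

Note that the base64 lengths in the log are consistent with this layout: the `DHHC-1:00:` keys decode to 36 bytes (32-byte secret plus 4-byte CRC), while the `DHHC-1:03:` keys carry a 64-byte secret.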
01:02:41.714 nvme0n1 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmIzNzRjNTYyYzI2OTgzYWMzY2JhMzI1MTc1YTNhOGY4ZGIxNGU2ZDRkZWFhZDA0MDllNjdkYjJlZWI4Njc3Mw5Qc70=: 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:02:41.714 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:41.715 
11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:41.715 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:41.974 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:02:41.974 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:41.974 11:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.543 nvme0n1 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.543 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:42.803 
11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:42.803 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.804 request: 01:02:42.804 { 01:02:42.804 "name": "nvme0", 01:02:42.804 "trtype": "tcp", 01:02:42.804 "traddr": "10.0.0.1", 01:02:42.804 "adrfam": "ipv4", 01:02:42.804 "trsvcid": "4420", 01:02:42.804 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:02:42.804 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:02:42.804 "prchk_reftag": false, 01:02:42.804 "prchk_guard": false, 01:02:42.804 "hdgst": false, 01:02:42.804 "ddgst": false, 01:02:42.804 "allow_unrecognized_csi": false, 01:02:42.804 "method": "bdev_nvme_attach_controller", 01:02:42.804 "req_id": 1 01:02:42.804 } 01:02:42.804 Got JSON-RPC error response 01:02:42.804 response: 01:02:42.804 { 01:02:42.804 "code": -5, 01:02:42.804 "message": "Input/output 
error" 01:02:42.804 } 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.804 request: 01:02:42.804 { 01:02:42.804 "name": "nvme0", 01:02:42.804 "trtype": "tcp", 01:02:42.804 "traddr": "10.0.0.1", 
01:02:42.804 "adrfam": "ipv4", 01:02:42.804 "trsvcid": "4420", 01:02:42.804 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:02:42.804 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:02:42.804 "prchk_reftag": false, 01:02:42.804 "prchk_guard": false, 01:02:42.804 "hdgst": false, 01:02:42.804 "ddgst": false, 01:02:42.804 "dhchap_key": "key2", 01:02:42.804 "allow_unrecognized_csi": false, 01:02:42.804 "method": "bdev_nvme_attach_controller", 01:02:42.804 "req_id": 1 01:02:42.804 } 01:02:42.804 Got JSON-RPC error response 01:02:42.804 response: 01:02:42.804 { 01:02:42.804 "code": -5, 01:02:42.804 "message": "Input/output error" 01:02:42.804 } 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:42.804 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:43.064 11:13:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:43.064 11:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:02:43.064 11:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.064 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.065 request: 01:02:43.065 { 01:02:43.065 "name": "nvme0", 01:02:43.065 "trtype": "tcp", 01:02:43.065 "traddr": "10.0.0.1", 01:02:43.065 "adrfam": "ipv4", 01:02:43.065 "trsvcid": "4420", 01:02:43.065 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:02:43.065 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:02:43.065 "prchk_reftag": false, 01:02:43.065 "prchk_guard": false, 01:02:43.065 "hdgst": false, 01:02:43.065 "ddgst": false, 01:02:43.065 "dhchap_key": "key1", 01:02:43.065 "dhchap_ctrlr_key": "ckey2", 01:02:43.065 "allow_unrecognized_csi": false, 01:02:43.065 "method": "bdev_nvme_attach_controller", 01:02:43.065 "req_id": 1 01:02:43.065 } 01:02:43.065 Got JSON-RPC error response 01:02:43.065 response: 01:02:43.065 { 01:02:43.065 "code": -5, 01:02:43.065 "message": "Input/output error" 01:02:43.065 } 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.065 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.324 nvme0n1 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:43.324 11:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 01:02:43.324 11:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.324 request: 01:02:43.324 { 01:02:43.324 "name": "nvme0", 01:02:43.324 "dhchap_key": "key1", 01:02:43.324 "dhchap_ctrlr_key": "ckey2", 01:02:43.324 "method": "bdev_nvme_set_keys", 01:02:43.324 "req_id": 1 01:02:43.324 } 01:02:43.324 Got JSON-RPC error response 01:02:43.324 response: 01:02:43.324 { 01:02:43.324 "code": -13, 01:02:43.324 "message": "Permission denied" 01:02:43.324 } 01:02:43.324 
11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:02:43.324 11:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:02:44.704 11:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU2ZGFkYmVhZTZkN2U3MTU4ZTFjZGI3YTQzYWVkMWRiYjIyNjgwY2Y0MGFkMGM5Dg4xYg==: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: ]] 01:02:45.645 11:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njg4ODVjZWNlYWQ5NDFiZDI5NzBhM2ZlMDU0NzdlYzg4MzVlMDJiNmFhNzYzODUzmAM54g==: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:45.645 nvme0n1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:45.645 11:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE1ZDhkYzM0MjVkMTNhNWI4N2IyNmNhZTc4ODhjOGY1+8FF: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzYjA0ZDU3MzYzNmYwYWZkMDkyZGNhYTUxNzE4NTdyr2hO: 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:02:45.645 
11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:45.645 request: 01:02:45.645 { 01:02:45.645 "name": "nvme0", 01:02:45.645 "dhchap_key": "key2", 01:02:45.645 "dhchap_ctrlr_key": "ckey1", 01:02:45.645 "method": "bdev_nvme_set_keys", 01:02:45.645 "req_id": 1 01:02:45.645 } 01:02:45.645 Got JSON-RPC error response 01:02:45.645 response: 01:02:45.645 { 01:02:45.645 "code": -13, 01:02:45.645 "message": "Permission denied" 01:02:45.645 } 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:02:45.645 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:45.645 11:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:45.905 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:45.905 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 01:02:45.905 11:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:46.844 rmmod nvme_tcp 01:02:46.844 rmmod nvme_fabrics 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2531268 ']' 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2531268 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2531268 ']' 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2531268 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:46.844 11:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2531268 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2531268' 01:02:47.104 killing process with pid 2531268 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2531268 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2531268 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:47.104 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:47.364 11:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:02:49.273 11:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:02:52.567 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 01:02:52.567 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 01:02:52.827 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 01:02:56.120 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 01:02:56.120 11:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.khB /tmp/spdk.key-null.nK8 /tmp/spdk.key-sha256.Mre /tmp/spdk.key-sha384.huw /tmp/spdk.key-sha512.E3W 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 01:02:56.120 11:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:02:58.657 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 01:02:58.657 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 01:02:58.657 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 01:02:58.657 01:02:58.657 real 1m6.965s 01:02:58.657 user 0m59.634s 01:02:58.657 sys 0m14.505s 01:02:58.657 11:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:58.657 11:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:02:58.657 ************************************ 01:02:58.657 END TEST nvmf_auth_host 01:02:58.657 ************************************ 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
01:02:58.916 11:13:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:02:58.916 ************************************ 01:02:58.916 START TEST nvmf_digest 01:02:58.916 ************************************ 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 01:02:58.916 * Looking for test storage... 01:02:58.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 01:02:58.916 11:13:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:58.916 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:58.916 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:58.916 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:58.916 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:58.916 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:58.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:58.917 --rc genhtml_branch_coverage=1 01:02:58.917 --rc genhtml_function_coverage=1 01:02:58.917 --rc genhtml_legend=1 01:02:58.917 --rc geninfo_all_blocks=1 01:02:58.917 --rc geninfo_unexecuted_blocks=1 01:02:58.917 01:02:58.917 ' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:58.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:58.917 --rc genhtml_branch_coverage=1 01:02:58.917 --rc genhtml_function_coverage=1 01:02:58.917 --rc genhtml_legend=1 01:02:58.917 --rc geninfo_all_blocks=1 01:02:58.917 --rc geninfo_unexecuted_blocks=1 01:02:58.917 01:02:58.917 ' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:58.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:58.917 --rc genhtml_branch_coverage=1 01:02:58.917 --rc genhtml_function_coverage=1 01:02:58.917 --rc genhtml_legend=1 01:02:58.917 --rc geninfo_all_blocks=1 01:02:58.917 --rc geninfo_unexecuted_blocks=1 01:02:58.917 01:02:58.917 ' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:58.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:58.917 --rc genhtml_branch_coverage=1 01:02:58.917 --rc genhtml_function_coverage=1 01:02:58.917 --rc genhtml_legend=1 01:02:58.917 --rc geninfo_all_blocks=1 01:02:58.917 --rc geninfo_unexecuted_blocks=1 01:02:58.917 01:02:58.917 ' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:02:58.917 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:02:59.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:02:59.177 11:14:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 01:02:59.177 11:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:03:05.755 11:14:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:03:05.755 Found 0000:af:00.0 (0x8086 - 0x159b) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:03:05.755 Found 0000:af:00.1 (0x8086 - 0x159b) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:03:05.755 Found net devices under 0000:af:00.0: cvl_0_0 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:03:05.755 Found net devices under 0000:af:00.1: cvl_0_1 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:03:05.755 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:03:05.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:05.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 01:03:05.756 01:03:05.756 --- 10.0.0.2 ping statistics --- 01:03:05.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:05.756 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:03:05.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:05.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 01:03:05.756 01:03:05.756 --- 10.0.0.1 ping statistics --- 01:03:05.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:05.756 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:03:05.756 ************************************ 01:03:05.756 START TEST nvmf_digest_clean 01:03:05.756 ************************************ 01:03:05.756 
11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2544389 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2544389 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2544389 ']' 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:05.756 11:14:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:05.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:05.756 [2024-12-09 11:14:06.554286] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:05.756 [2024-12-09 11:14:06.554356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:05.756 [2024-12-09 11:14:06.687448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:05.756 [2024-12-09 11:14:06.737232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:05.756 [2024-12-09 11:14:06.737283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:05.756 [2024-12-09 11:14:06.737298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:05.756 [2024-12-09 11:14:06.737312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:05.756 [2024-12-09 11:14:06.737324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:03:05.756 [2024-12-09 11:14:06.737935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:05.756 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:05.756 null0 01:03:05.756 [2024-12-09 11:14:06.919271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:06.016 [2024-12-09 11:14:06.943478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2544543 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2544543 /var/tmp/bperf.sock 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2544543 ']' 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:06.016 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:06.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
01:03:06.017 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:06.017 11:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:06.017 [2024-12-09 11:14:07.005320] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:06.017 [2024-12-09 11:14:07.005388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544543 ] 01:03:06.017 [2024-12-09 11:14:07.101068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:06.017 [2024-12-09 11:14:07.145027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:06.276 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:06.276 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:03:06.276 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:03:06.276 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:03:06.276 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:03:06.535 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:06.535 11:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:07.105 nvme0n1 01:03:07.105 11:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:03:07.105 11:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:07.105 Running I/O for 2 seconds... 01:03:09.425 18125.00 IOPS, 70.80 MiB/s [2024-12-09T10:14:10.601Z] 18213.00 IOPS, 71.14 MiB/s 01:03:09.425 Latency(us) 01:03:09.425 [2024-12-09T10:14:10.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:09.425 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:03:09.425 nvme0n1 : 2.00 18248.30 71.28 0.00 0.00 7007.78 2179.78 14531.90 01:03:09.425 [2024-12-09T10:14:10.601Z] =================================================================================================================== 01:03:09.425 [2024-12-09T10:14:10.601Z] Total : 18248.30 71.28 0.00 0.00 7007.78 2179.78 14531.90 01:03:09.425 { 01:03:09.425 "results": [ 01:03:09.425 { 01:03:09.425 "job": "nvme0n1", 01:03:09.425 "core_mask": "0x2", 01:03:09.425 "workload": "randread", 01:03:09.425 "status": "finished", 01:03:09.425 "queue_depth": 128, 01:03:09.425 "io_size": 4096, 01:03:09.425 "runtime": 2.003145, 01:03:09.425 "iops": 18248.304541109104, 01:03:09.425 "mibps": 71.28243961370744, 01:03:09.425 "io_failed": 0, 01:03:09.425 "io_timeout": 0, 01:03:09.425 "avg_latency_us": 7007.779909460928, 01:03:09.425 "min_latency_us": 2179.784347826087, 01:03:09.425 "max_latency_us": 14531.895652173913 01:03:09.425 } 01:03:09.425 ], 01:03:09.425 "core_count": 1 01:03:09.425 } 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:03:09.425 | select(.opcode=="crc32c") 01:03:09.425 | "\(.module_name) \(.executed)"' 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2544543 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2544543 ']' 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2544543 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:09.425 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544543 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544543' 01:03:09.685 killing process with pid 2544543 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2544543 01:03:09.685 Received shutdown signal, test time was about 2.000000 seconds 01:03:09.685 01:03:09.685 Latency(us) 01:03:09.685 [2024-12-09T10:14:10.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:09.685 [2024-12-09T10:14:10.861Z] =================================================================================================================== 01:03:09.685 [2024-12-09T10:14:10.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2544543 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2544948 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2544948 /var/tmp/bperf.sock 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2544948 ']' 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:09.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:09.685 11:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:09.945 [2024-12-09 11:14:10.870989] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:09.945 [2024-12-09 11:14:10.871064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544948 ] 01:03:09.945 I/O size of 131072 is greater than zero copy threshold (65536). 01:03:09.945 Zero copy mechanism will not be used. 
01:03:09.945 [2024-12-09 11:14:10.968727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:09.945 [2024-12-09 11:14:11.011286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:10.902 11:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:10.902 11:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:03:10.902 11:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:03:10.902 11:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:03:10.902 11:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:03:11.162 11:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:11.162 11:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:11.421 nvme0n1 01:03:11.421 11:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:03:11.421 11:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:11.681 I/O size of 131072 is greater than zero copy threshold (65536). 01:03:11.681 Zero copy mechanism will not be used. 01:03:11.681 Running I/O for 2 seconds... 
01:03:13.558 4064.00 IOPS, 508.00 MiB/s [2024-12-09T10:14:14.734Z] 4296.00 IOPS, 537.00 MiB/s 01:03:13.558 Latency(us) 01:03:13.558 [2024-12-09T10:14:14.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:13.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:03:13.558 nvme0n1 : 2.00 4300.85 537.61 0.00 0.00 3717.60 2436.23 11682.50 01:03:13.558 [2024-12-09T10:14:14.734Z] =================================================================================================================== 01:03:13.558 [2024-12-09T10:14:14.734Z] Total : 4300.85 537.61 0.00 0.00 3717.60 2436.23 11682.50 01:03:13.558 { 01:03:13.558 "results": [ 01:03:13.558 { 01:03:13.558 "job": "nvme0n1", 01:03:13.558 "core_mask": "0x2", 01:03:13.558 "workload": "randread", 01:03:13.558 "status": "finished", 01:03:13.558 "queue_depth": 16, 01:03:13.558 "io_size": 131072, 01:03:13.558 "runtime": 2.001465, 01:03:13.558 "iops": 4300.849627647748, 01:03:13.558 "mibps": 537.6062034559685, 01:03:13.558 "io_failed": 0, 01:03:13.558 "io_timeout": 0, 01:03:13.559 "avg_latency_us": 3717.6008016809437, 01:03:13.559 "min_latency_us": 2436.229565217391, 01:03:13.559 "max_latency_us": 11682.504347826087 01:03:13.559 } 01:03:13.559 ], 01:03:13.559 "core_count": 1 01:03:13.559 } 01:03:13.559 11:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:03:13.559 11:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:03:13.559 11:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:03:13.559 11:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:03:13.559 11:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
01:03:13.559 | select(.opcode=="crc32c") 01:03:13.559 | "\(.module_name) \(.executed)"' 01:03:14.127 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2544948 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2544948 ']' 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2544948 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544948 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544948' 01:03:14.128 killing process with pid 2544948 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2544948 01:03:14.128 Received shutdown signal, test time was about 2.000000 seconds 01:03:14.128 
01:03:14.128 Latency(us) 01:03:14.128 [2024-12-09T10:14:15.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:14.128 [2024-12-09T10:14:15.304Z] =================================================================================================================== 01:03:14.128 [2024-12-09T10:14:15.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2544948 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:03:14.128 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2545561 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2545561 /var/tmp/bperf.sock 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2545561 ']' 01:03:14.129 11:14:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:14.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:14.129 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:14.388 [2024-12-09 11:14:15.347256] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:14.388 [2024-12-09 11:14:15.347331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545561 ] 01:03:14.388 [2024-12-09 11:14:15.443311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:14.388 [2024-12-09 11:14:15.486783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:14.647 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:14.647 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:03:14.647 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:03:14.647 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:03:14.647 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:03:14.907 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:14.907 11:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:15.166 nvme0n1 01:03:15.166 11:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:03:15.166 11:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:15.425 Running I/O for 2 seconds... 
01:03:17.299 19051.00 IOPS, 74.42 MiB/s [2024-12-09T10:14:18.475Z] 19177.50 IOPS, 74.91 MiB/s 01:03:17.299 Latency(us) 01:03:17.299 [2024-12-09T10:14:18.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:17.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:03:17.299 nvme0n1 : 2.01 19192.73 74.97 0.00 0.00 6663.45 3105.84 11739.49 01:03:17.299 [2024-12-09T10:14:18.475Z] =================================================================================================================== 01:03:17.299 [2024-12-09T10:14:18.475Z] Total : 19192.73 74.97 0.00 0.00 6663.45 3105.84 11739.49 01:03:17.299 { 01:03:17.299 "results": [ 01:03:17.299 { 01:03:17.299 "job": "nvme0n1", 01:03:17.299 "core_mask": "0x2", 01:03:17.299 "workload": "randwrite", 01:03:17.299 "status": "finished", 01:03:17.299 "queue_depth": 128, 01:03:17.299 "io_size": 4096, 01:03:17.299 "runtime": 2.005082, 01:03:17.299 "iops": 19192.731269843327, 01:03:17.299 "mibps": 74.9716065228255, 01:03:17.299 "io_failed": 0, 01:03:17.299 "io_timeout": 0, 01:03:17.299 "avg_latency_us": 6663.454982019164, 01:03:17.299 "min_latency_us": 3105.8365217391306, 01:03:17.299 "max_latency_us": 11739.492173913044 01:03:17.299 } 01:03:17.299 ], 01:03:17.299 "core_count": 1 01:03:17.299 } 01:03:17.299 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:03:17.299 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:03:17.299 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:03:17.299 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:03:17.299 | select(.opcode=="crc32c") 01:03:17.299 | "\(.module_name) \(.executed)"' 01:03:17.299 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2545561 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2545561 ']' 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2545561 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:17.559 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545561 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545561' 01:03:17.819 killing process with pid 2545561 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2545561 01:03:17.819 Received shutdown signal, test time was about 2.000000 seconds 
01:03:17.819 01:03:17.819 Latency(us) 01:03:17.819 [2024-12-09T10:14:18.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:17.819 [2024-12-09T10:14:18.995Z] =================================================================================================================== 01:03:17.819 [2024-12-09T10:14:18.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2545561 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2546021 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2546021 /var/tmp/bperf.sock 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2546021 ']' 01:03:17.819 11:14:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:17.819 11:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:18.079 [2024-12-09 11:14:19.023026] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:18.079 [2024-12-09 11:14:19.023093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546021 ] 01:03:18.079 I/O size of 131072 is greater than zero copy threshold (65536). 01:03:18.079 Zero copy mechanism will not be used. 
01:03:18.079 [2024-12-09 11:14:19.105724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:18.079 [2024-12-09 11:14:19.149720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:18.079 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:18.079 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:03:18.079 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:03:18.079 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:03:18.079 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:03:18.339 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:18.339 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:18.599 nvme0n1 01:03:18.599 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:03:18.599 11:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:18.859 I/O size of 131072 is greater than zero copy threshold (65536). 01:03:18.859 Zero copy mechanism will not be used. 01:03:18.859 Running I/O for 2 seconds... 
01:03:20.737 4623.00 IOPS, 577.88 MiB/s [2024-12-09T10:14:21.913Z] 4610.50 IOPS, 576.31 MiB/s 01:03:20.737 Latency(us) 01:03:20.737 [2024-12-09T10:14:21.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:20.737 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:03:20.737 nvme0n1 : 2.00 4610.84 576.36 0.00 0.00 3464.93 2336.50 10143.83 01:03:20.737 [2024-12-09T10:14:21.913Z] =================================================================================================================== 01:03:20.737 [2024-12-09T10:14:21.913Z] Total : 4610.84 576.36 0.00 0.00 3464.93 2336.50 10143.83 01:03:20.737 { 01:03:20.737 "results": [ 01:03:20.737 { 01:03:20.737 "job": "nvme0n1", 01:03:20.737 "core_mask": "0x2", 01:03:20.737 "workload": "randwrite", 01:03:20.737 "status": "finished", 01:03:20.737 "queue_depth": 16, 01:03:20.737 "io_size": 131072, 01:03:20.737 "runtime": 2.004407, 01:03:20.737 "iops": 4610.840014029087, 01:03:20.737 "mibps": 576.3550017536359, 01:03:20.737 "io_failed": 0, 01:03:20.737 "io_timeout": 0, 01:03:20.737 "avg_latency_us": 3464.933974389131, 01:03:20.737 "min_latency_us": 2336.5008695652173, 01:03:20.737 "max_latency_us": 10143.83304347826 01:03:20.737 } 01:03:20.737 ], 01:03:20.737 "core_count": 1 01:03:20.737 } 01:03:20.737 11:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:03:20.737 11:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:03:20.737 11:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:03:20.737 11:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:03:20.737 | select(.opcode=="crc32c") 01:03:20.737 | "\(.module_name) \(.executed)"' 01:03:20.737 11:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2546021 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2546021 ']' 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2546021 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546021 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546021' 01:03:20.997 killing process with pid 2546021 01:03:20.997 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2546021 01:03:20.997 Received shutdown signal, test time was about 2.000000 seconds 
01:03:20.997 01:03:20.997 Latency(us) 01:03:20.997 [2024-12-09T10:14:22.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:20.997 [2024-12-09T10:14:22.173Z] =================================================================================================================== 01:03:20.997 [2024-12-09T10:14:22.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2546021 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2544389 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2544389 ']' 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2544389 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:21.257 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544389 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544389' 01:03:21.516 killing process with pid 2544389 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2544389 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2544389 01:03:21.516 01:03:21.516 
real 0m16.199s 01:03:21.516 user 0m31.828s 01:03:21.516 sys 0m5.337s 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:21.516 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:03:21.516 ************************************ 01:03:21.516 END TEST nvmf_digest_clean 01:03:21.516 ************************************ 01:03:21.775 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:03:21.776 ************************************ 01:03:21.776 START TEST nvmf_digest_error 01:03:21.776 ************************************ 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2546589 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2546589 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2546589 ']' 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:21.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:21.776 11:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:21.776 [2024-12-09 11:14:22.837087] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:21.776 [2024-12-09 11:14:22.837159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:22.036 [2024-12-09 11:14:22.970163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:22.036 [2024-12-09 11:14:23.022132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:22.036 [2024-12-09 11:14:23.022178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:03:22.036 [2024-12-09 11:14:23.022193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:22.036 [2024-12-09 11:14:23.022208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:22.036 [2024-12-09 11:14:23.022220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:22.036 [2024-12-09 11:14:23.022850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:22.036 [2024-12-09 11:14:23.115526] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.036 11:14:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.036 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:22.295 null0 01:03:22.295 [2024-12-09 11:14:23.231143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:22.296 [2024-12-09 11:14:23.255368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2546608 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2546608 /var/tmp/bperf.sock 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2546608 ']' 
01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:22.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:22.296 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:22.296 [2024-12-09 11:14:23.318303] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:22.296 [2024-12-09 11:14:23.318372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546608 ] 01:03:22.296 [2024-12-09 11:14:23.413405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:22.296 [2024-12-09 11:14:23.455521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:22.555 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:22.555 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:03:22.555 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:03:22.555 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:22.815 11:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:23.073 nvme0n1 01:03:23.073 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:03:23.073 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:23.074 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:23.074 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:23.074 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:03:23.074 11:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:23.333 Running I/O for 2 seconds... 01:03:23.333 [2024-12-09 11:14:24.365015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.365052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.365067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.379232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.379259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.379272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.392848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.392873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.392885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.409391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.409415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11754 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.409428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.425154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.425178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.425190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.437833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.437855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.437867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.448976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.448999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.449011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.463184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.463212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.463224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.478814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.478837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.478849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.493352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.493377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.493389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.333 [2024-12-09 11:14:24.507758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.333 [2024-12-09 11:14:24.507781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.333 [2024-12-09 11:14:24.507793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.523911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.523935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.523946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.537961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.537985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.537996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.552625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.552655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.552667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.565624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.565651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.565663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.580596] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.580620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.580632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.594302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.594338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.609370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.609395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.609406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.619406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.619429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.619440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.635197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.635221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.635233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.649072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.649097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.649109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.663442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.663464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.663476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.679713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.679736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.679748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.694106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.694128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.694140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.707933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.707959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.707978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.721719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.721742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.721754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.735264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.735287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 
11:14:24.735298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.750246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.750270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.750282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.593 [2024-12-09 11:14:24.764370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.593 [2024-12-09 11:14:24.764393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.593 [2024-12-09 11:14:24.764404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.777963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.777986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.777998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.792018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.792042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18917 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.792053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.805353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.805375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.819273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.819296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.819309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.833916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.833942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.833954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.847854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.847878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.862684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.862707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.862719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.876396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.876421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.876432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.890240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.890276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.901327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.901352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.901363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.915958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.915983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.915996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.929598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.929622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.929633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.945044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.945068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.945086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.961334] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.961358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.961370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.976843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.976867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.976879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:24.991552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:24.991576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:24.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:25.007078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:25.007103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:25.007114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 01:03:23.854 [2024-12-09 11:14:25.020874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:23.854 [2024-12-09 11:14:25.020897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:23.854 [2024-12-09 11:14:25.020909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.113 [2024-12-09 11:14:25.036222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.113 [2024-12-09 11:14:25.036247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.036258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.050370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.050395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.050407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.064027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.064052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.064064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.079359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.079389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.079400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.094289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.094315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.094327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.105373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.105397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.105409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.119820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.119845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 
11:14:25.119857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.134602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.134626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.134638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.148551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.148577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.148589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.162442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.162465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.162477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.176329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.176353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7193 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.176365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.190096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.190120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.190133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.206101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.206126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.206138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.220589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.220613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.235897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.235922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.251152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.251177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.251189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.265490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.265514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.265526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.114 [2024-12-09 11:14:25.275396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.114 [2024-12-09 11:14:25.275421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.114 [2024-12-09 11:14:25.275432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.290444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 
01:03:24.373 [2024-12-09 11:14:25.290470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.290482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.305927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.305951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.305963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.320605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.320629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.320650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.335116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.335139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.335151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 17744.00 IOPS, 69.31 MiB/s [2024-12-09T10:14:25.549Z] [2024-12-09 
11:14:25.349574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.349599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.349610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.363838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.363862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.363874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.378056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.378080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.378091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.393925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.393949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.393961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.408493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.408517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.408528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.423899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.423923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.423935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.435222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.435245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.435257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.450306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.373 [2024-12-09 11:14:25.450330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.373 [2024-12-09 11:14:25.450342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.373 [2024-12-09 11:14:25.465350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.465374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.374 [2024-12-09 11:14:25.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.374 [2024-12-09 11:14:25.479688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.374 [2024-12-09 11:14:25.479723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.374 [2024-12-09 11:14:25.493287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.493311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.374 [2024-12-09 11:14:25.493323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.374 [2024-12-09 11:14:25.507436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.507460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:03:24.374 [2024-12-09 11:14:25.507472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.374 [2024-12-09 11:14:25.522147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.522171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.374 [2024-12-09 11:14:25.522182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.374 [2024-12-09 11:14:25.536597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.374 [2024-12-09 11:14:25.536621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.374 [2024-12-09 11:14:25.536632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.551199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.551224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.551236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.561769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.561793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:19452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.575653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.575688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.592322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.592346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.592357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.605804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.605827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.605839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.621597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.621622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.621634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.637162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.637185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.633 [2024-12-09 11:14:25.637198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.633 [2024-12-09 11:14:25.650534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.633 [2024-12-09 11:14:25.650558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.650570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.666080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.666103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.666114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.679904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.679928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.679940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.693762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.693790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.693802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.707618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.707641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.707658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.721432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.721455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.721467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.735324] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.735347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.735359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.749103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.749127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.749139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.764662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.764686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.764698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.779996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.780019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.780031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 01:03:24.634 [2024-12-09 11:14:25.794738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.634 [2024-12-09 11:14:25.794763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.634 [2024-12-09 11:14:25.794774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.809269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.809295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.809307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.823747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.823771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.823783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.837293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.837316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.837327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.852490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.852515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.852526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.866440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.866464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.866476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.880741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.894 [2024-12-09 11:14:25.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.894 [2024-12-09 11:14:25.880776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.894 [2024-12-09 11:14:25.895553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.895576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.895588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.909163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.909186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.909198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.922805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.922827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.922839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.935211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.935234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.935249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.949330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.949354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11974 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.949365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.963149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.963173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.963184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.976935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.976958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.976970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:25.990613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:25.990637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:25.990655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:26.005080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:26.005104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:26.005116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:26.020529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:26.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:26.020566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:26.036314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:26.036339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:26.036350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:26.051733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 11:14:26.051757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:26.051769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:24.895 [2024-12-09 11:14:26.066857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:24.895 [2024-12-09 
11:14:26.066884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:24.895 [2024-12-09 11:14:26.066896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.080958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.080983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.080995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.095802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.095826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.095838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.109689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.109712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.109723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.123398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.123421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.123433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.137145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.137168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.137180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.150683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.150707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.150719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.163234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.163259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.163271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.178356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.178381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.178398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.194085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.194110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.194122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.208677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.208702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.208715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.223613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.223638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.223656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.238609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.238634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.254077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.254100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.254112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.267963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.267986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.267998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.281597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.281620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.281632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.297916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.297939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.155 [2024-12-09 11:14:26.297951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.155 [2024-12-09 11:14:26.311207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.155 [2024-12-09 11:14:26.311236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.156 [2024-12-09 11:14:26.311248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.156 [2024-12-09 11:14:26.325866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.156 [2024-12-09 11:14:26.325892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.156 [2024-12-09 11:14:26.325903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.415 [2024-12-09 11:14:26.338397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.415 [2024-12-09 11:14:26.338421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.415 [2024-12-09 
11:14:26.338433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.415 17813.50 IOPS, 69.58 MiB/s [2024-12-09T10:14:26.591Z] [2024-12-09 11:14:26.351184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1811720) 01:03:25.415 [2024-12-09 11:14:26.351209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:25.415 [2024-12-09 11:14:26.351222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:03:25.415 01:03:25.415 Latency(us) 01:03:25.415 [2024-12-09T10:14:26.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:25.415 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:03:25.415 nvme0n1 : 2.00 17839.59 69.69 0.00 0.00 7167.38 2792.40 22909.11 01:03:25.415 [2024-12-09T10:14:26.591Z] =================================================================================================================== 01:03:25.415 [2024-12-09T10:14:26.591Z] Total : 17839.59 69.69 0.00 0.00 7167.38 2792.40 22909.11 01:03:25.415 { 01:03:25.415 "results": [ 01:03:25.415 { 01:03:25.415 "job": "nvme0n1", 01:03:25.415 "core_mask": "0x2", 01:03:25.415 "workload": "randread", 01:03:25.415 "status": "finished", 01:03:25.415 "queue_depth": 128, 01:03:25.415 "io_size": 4096, 01:03:25.415 "runtime": 2.00425, 01:03:25.415 "iops": 17839.59086940252, 01:03:25.415 "mibps": 69.6859018336036, 01:03:25.415 "io_failed": 0, 01:03:25.415 "io_timeout": 0, 01:03:25.415 "avg_latency_us": 7167.3832328953695, 01:03:25.415 "min_latency_us": 2792.4034782608696, 01:03:25.415 "max_latency_us": 22909.106086956523 01:03:25.415 } 01:03:25.415 ], 01:03:25.415 "core_count": 1 01:03:25.415 } 01:03:25.415 11:14:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:03:25.415 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:03:25.415 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:03:25.415 | .driver_specific 01:03:25.415 | .nvme_error 01:03:25.415 | .status_code 01:03:25.415 | .command_transient_transport_error' 01:03:25.415 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2546608 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2546608 ']' 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2546608 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546608 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546608' 
01:03:25.675 killing process with pid 2546608 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2546608 01:03:25.675 Received shutdown signal, test time was about 2.000000 seconds 01:03:25.675 01:03:25.675 Latency(us) 01:03:25.675 [2024-12-09T10:14:26.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:25.675 [2024-12-09T10:14:26.851Z] =================================================================================================================== 01:03:25.675 [2024-12-09T10:14:26.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:25.675 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2546608 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2547144 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2547144 /var/tmp/bperf.sock 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2547144 ']' 01:03:25.935 11:14:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:25.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:25.935 11:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:25.935 [2024-12-09 11:14:26.990841] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:25.935 [2024-12-09 11:14:26.990921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547144 ] 01:03:25.935 I/O size of 131072 is greater than zero copy threshold (65536). 01:03:25.935 Zero copy mechanism will not be used. 
01:03:25.935 [2024-12-09 11:14:27.087150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:03:26.194 [2024-12-09 11:14:27.127657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:03:26.194 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:03:26.194 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
01:03:26.194 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:03:26.194 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:03:26.454 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:03:27.023 nvme0n1
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:03:27.023 11:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
01:03:27.023 I/O size of 131072 is greater than zero copy threshold (65536).
01:03:27.023 Zero copy mechanism will not be used.
01:03:27.023 Running I/O for 2 seconds...
01:03:27.023 [2024-12-09 11:14:28.124447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.023 [2024-12-09 11:14:28.124496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.023 [2024-12-09 11:14:28.124514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:27.023 [2024-12-09 11:14:28.131946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.023 [2024-12-09 11:14:28.131975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.023 [2024-12-09 11:14:28.131989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.023
[2024-12-09 11:14:28.140256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.023 [2024-12-09 11:14:28.140285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.023 [2024-12-09 11:14:28.140298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.023 [2024-12-09 11:14:28.148846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.023 [2024-12-09 11:14:28.148873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.023 [2024-12-09 11:14:28.148887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.023 [2024-12-09 11:14:28.158138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.023 [2024-12-09 11:14:28.158165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.023 [2024-12-09 11:14:28.158178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.023 [2024-12-09 11:14:28.166834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.024 [2024-12-09 11:14:28.166860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.024 [2024-12-09 11:14:28.166872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.024 [2024-12-09 11:14:28.173670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.024 [2024-12-09 11:14:28.173694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.024 [2024-12-09 11:14:28.173706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.024 [2024-12-09 11:14:28.180300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.024 [2024-12-09 11:14:28.180325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.024 [2024-12-09 11:14:28.180338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.024 [2024-12-09 11:14:28.186874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.024 [2024-12-09 11:14:28.186899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.024 [2024-12-09 11:14:28.186911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.024 [2024-12-09 11:14:28.193576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.024 [2024-12-09 11:14:28.193600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.024 [2024-12-09 11:14:28.193612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.284 [2024-12-09 11:14:28.200697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.284 [2024-12-09 11:14:28.200725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.284 [2024-12-09 11:14:28.200737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.284 [2024-12-09 11:14:28.207386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.284 [2024-12-09 11:14:28.207411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.284 [2024-12-09 11:14:28.207423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.284 [2024-12-09 11:14:28.214059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.284 [2024-12-09 11:14:28.214083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.284 [2024-12-09 11:14:28.214099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.284 [2024-12-09 11:14:28.220889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.284 [2024-12-09 11:14:28.220915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:27.284 [2024-12-09 11:14:28.220927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.284 [2024-12-09 11:14:28.227534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.284 [2024-12-09 11:14:28.227560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.284 [2024-12-09 11:14:28.227571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.234323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.234349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.234360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.240976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.241001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.241013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.247553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.247579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.247590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.254306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.254329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.254341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.260962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.260988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.267617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.267660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.274342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.274368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.274379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.281046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.281071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.281082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.287786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.287811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.287823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.294498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.294523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.294535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.301224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.301249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.301260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.308008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.308033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.308045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.314726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.314752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.314764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.321411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.321437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.321449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.328145] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.328169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.328185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.334832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.334869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.341443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.341468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.341480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.348036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.348062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.348073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.354490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.354515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.354528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.360876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.360902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.360914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.367250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.367273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.367286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.373660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.373685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.373697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.380009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.380034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.386337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.386365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.386378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.392614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.392640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.392659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.398841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.398866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.398878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.405116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.285 [2024-12-09 11:14:28.405141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.285 [2024-12-09 11:14:28.405152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.285 [2024-12-09 11:14:28.411388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.411413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.411425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.417586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.417611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.423842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.423867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.423879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.430064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.430089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.430101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.436249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.436274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.436286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.442363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.442387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.442399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.448555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.448592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.286 [2024-12-09 11:14:28.454686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.286 [2024-12-09 11:14:28.454710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.286 [2024-12-09 11:14:28.454722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.460802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.460828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.460840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.466966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.466990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.473073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.473099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.473111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.479205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.479229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.479241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.485290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.485314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.485326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.491355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.491380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.491395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.497422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.497446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.497458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.503508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.503533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.503544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.509797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.509822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.509834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.515894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.515919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.521532] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.521557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.521568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.527199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.527224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.527238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.533125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.533150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.533162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.538956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.538981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.538992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.544852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.544879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.544891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.550839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.550876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.556937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.556960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.556972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.562991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.563016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.563027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.569087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.569110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.569122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.575146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.575169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.575181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.581195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.581218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 11:14:28.581230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.587293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.547 [2024-12-09 11:14:28.587317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.547 [2024-12-09 
11:14:28.587328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.547 [2024-12-09 11:14:28.593360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.593383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.593394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.599483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.599508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.599519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.605574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.605598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.605610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.611640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.611669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.611680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.617699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.617724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.617735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.623727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.623751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.623762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.629719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.629743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.629754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.635767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.635791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.635803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.642847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.642873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.642886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.650896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.650925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.650937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.659191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.659217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.659229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.667785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 
01:03:27.548 [2024-12-09 11:14:28.667811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.667823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.675890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.675915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.675927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.683811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.683842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.683854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.692386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.692411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.692423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.701069] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.701106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.709879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.709905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.709917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.548 [2024-12-09 11:14:28.719115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.548 [2024-12-09 11:14:28.719142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.548 [2024-12-09 11:14:28.719154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.727529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.727557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.727569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.736016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.736043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.736055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.745623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.745658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.745671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.754607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.754632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.762906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.762931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.762943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.770128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.770153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.770165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.778307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.778332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.778344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.784991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.785016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.785027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.791067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.791091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 
11:14:28.791107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.797160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.797185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.797196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.803256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.803281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.803292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.809395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.809420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.809432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.815524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.815549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.821814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.821839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.821851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.827892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.827916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.827928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.834025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.834049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.834060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.840097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.809 [2024-12-09 11:14:28.840121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.809 [2024-12-09 11:14:28.840133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.809 [2024-12-09 11:14:28.846185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.846213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.846225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.852270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.852295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.852307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.858252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.858276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.858288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.861521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 
01:03:27.810 [2024-12-09 11:14:28.861544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.867424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.867448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.867459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.873522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.873546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.873557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.879608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.879632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.879651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.885684] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.885708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.885719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.891858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.891895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.891907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.898014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.898038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.898050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:27.810 [2024-12-09 11:14:28.904124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:27.810 [2024-12-09 11:14:28.904149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:27.810 [2024-12-09 11:14:28.904160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.909687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.909710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.909722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.915549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.915573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.915584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.921525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.921548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.921559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.927236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.927260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.927271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.933200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.933223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.933234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.939084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.939107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.939118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.945198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.945223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.945238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.951288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.951312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.951324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.957183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.957208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.957219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.963290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.963314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.963325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.969385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.969409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.969420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.975443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.975468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.975479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:27.810 [2024-12-09 11:14:28.981579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:27.810 [2024-12-09 11:14:28.981603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:27.810 [2024-12-09 11:14:28.981614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:28.987803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:28.987829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:28.987841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:28.993956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:28.993982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:28.993994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.000030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.000058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.000070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.005819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.005844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.005856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.011818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.011843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.011855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.017564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.017588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.017600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.023372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.023396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.023408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.029267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.029291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.029303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.034995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.035019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.035031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.040794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.040819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.040830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.046899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.046925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.046940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.052657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.052681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.071 [2024-12-09 11:14:29.052692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.071 [2024-12-09 11:14:29.058756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.071 [2024-12-09 11:14:29.058780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.058791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.064868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.064893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.064904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.070543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.070567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.070578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.076652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.076677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.076688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.082778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.082801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.082813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.088852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.088877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.088888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.094934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.094971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.100987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.101015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.101027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.107052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.107077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.107088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.113116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.113141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.113152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.118987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.119013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.119025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 4749.00 IOPS, 593.62 MiB/s [2024-12-09T10:14:29.248Z] [2024-12-09 11:14:29.125593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.125618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.125630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.131758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.131784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.131796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.137900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.137926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.144061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.144086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.144098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.150190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.150216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.150229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.156327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.156353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.156364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.162401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.162426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.162438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.168369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.168395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.168408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.174474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.174500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.174513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.180692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.180719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.180732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.186904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.186943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.193070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.193096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.193109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.199457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.199484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.199497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.205669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.205695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.205712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.211904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.211930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.211943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.218143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.218169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.218182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.224309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.072 [2024-12-09 11:14:29.224335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.072 [2024-12-09 11:14:29.224347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.072 [2024-12-09 11:14:29.230450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.073 [2024-12-09 11:14:29.230476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.073 [2024-12-09 11:14:29.230489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.073 [2024-12-09 11:14:29.236584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.073 [2024-12-09 11:14:29.236610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.073 [2024-12-09 11:14:29.236622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.073 [2024-12-09 11:14:29.242767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.073 [2024-12-09 11:14:29.242793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.073 [2024-12-09 11:14:29.242805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.248955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.248983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.248997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.255119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.255147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.255160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.261251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.261280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.261293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.267360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.267386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.267397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.273480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.273505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.273517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.279602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.279627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.279639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.285699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.285725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.285736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.291415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.291439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.291450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.297526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.297550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.297562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.349 [2024-12-09 11:14:29.303633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.349 [2024-12-09 11:14:29.303663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.349 [2024-12-09 11:14:29.303675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.309343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.309367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.309378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.315468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.315492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.315503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.321592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.321616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.321627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.327687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.327711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.327723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.333823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.333847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.333860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.339917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.339942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.339954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.346109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.346134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.346146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.352282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.352307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.352320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.358443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.358469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.358482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.364700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.364726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.364742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.370884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.370910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.370923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.377022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.377048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.377061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.383200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.383226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.383238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.389399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.389426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.389439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.395637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710)
01:03:28.350 [2024-12-09 11:14:29.395672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:28.350 [2024-12-09 11:14:29.395687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:28.350 [2024-12-09 11:14:29.401897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.401925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.401939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.408102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.408129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.408142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.414264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.414291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.414304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.420465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.420492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.420506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.426625] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.426659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.426672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.432760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.432785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.432798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.438969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.438995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.439007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.445153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.445179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.445192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.451231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.451255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.451267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.457294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.457316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.457328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.463365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.463389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.463400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.469460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.350 [2024-12-09 11:14:29.469483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.350 [2024-12-09 11:14:29.469498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.350 [2024-12-09 11:14:29.475560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.475583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.475595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.481624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.481653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.481665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.487745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.487770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.493807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.493830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.493842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.499919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.499944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.499955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.506026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.506050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.506062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.512144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.512168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.512179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.351 [2024-12-09 11:14:29.517830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.351 [2024-12-09 11:14:29.517854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:28.351 [2024-12-09 11:14:29.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.523929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.523958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.523971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.530019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.530043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.530054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.536078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.536102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.536113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.542121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.542145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.542157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.548236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.548261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.548273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.554302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.554326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.554338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.560356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.560381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.560392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.566387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.566411] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.566422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.572410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.572434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.572445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.578463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.578487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.578499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.584499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.584523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.584535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.590544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.590568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.590580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.596594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.596618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.596630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.602273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.602297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.602308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.608381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.608406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.608417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.614142] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.614167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.614179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.620401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.620438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.626622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.626653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.626668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.632756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.632780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.632793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.638682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.638707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.638719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.644806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.644832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.644844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.650965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.650991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.651003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.656518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.656544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.656557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.662404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.612 [2024-12-09 11:14:29.662428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.612 [2024-12-09 11:14:29.662440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.612 [2024-12-09 11:14:29.668421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.668446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.668457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.674701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.674725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.674736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.681064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.681091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.681103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.687325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.687350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.687361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.693601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.693625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.699955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.699980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.699991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.706192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.706217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.706229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.712450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.712475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.712487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.718729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.718753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.718765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.725098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.725123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.725135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.731371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.731396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.731411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.737564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.737589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.737601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.743734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.743758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.743770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.749864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.749890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.749903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.756005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.756030] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.756041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.762122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.762147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.762158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.768273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.768298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.768309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.774580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.774604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.774616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.613 [2024-12-09 11:14:29.780865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cde710) 01:03:28.613 [2024-12-09 11:14:29.780889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.613 [2024-12-09 11:14:29.780900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.787230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.787261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.787272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.793587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.793612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.799946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.799971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.799982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.806306] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.806330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.812660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.812695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.819821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.819846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.819858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.826832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.826857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.826870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.833487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.833512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.833524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.839938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.839962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.839974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.846437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.846462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.846473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.852971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.852996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.853007] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.859504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.859527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.859538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.866067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.866092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.872726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.872750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.872762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.879420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.879445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 
11:14:29.879456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.886090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.886115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.886127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.892697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.892721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.892732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.899381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.899406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.899422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.906108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.906135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.906147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.912744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.912770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.875 [2024-12-09 11:14:29.912782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.875 [2024-12-09 11:14:29.919230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.875 [2024-12-09 11:14:29.919256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.919267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.925885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.925908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.925920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.932507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.932532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.932543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.939130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.939154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.939166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.945675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.945700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.945712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.952325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.952350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.952362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.959009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.959039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.959050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.965747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.965772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.965784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.972453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.972477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.972488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.978979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.979005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.979016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.985634] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.985666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.985678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.992287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.992312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.992324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:29.998916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:29.998940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:29.998952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.005608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.005633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.005653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.012348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.012372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.012384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.019084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.019109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.019120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.025761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.025786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.025798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.033156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.033181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.033193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.039356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.039381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.039393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:28.876 [2024-12-09 11:14:30.045865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:28.876 [2024-12-09 11:14:30.045890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:28.876 [2024-12-09 11:14:30.045902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.052493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.052518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.052531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.058974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.058999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 
11:14:30.059011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.065369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.065394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.065406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.071785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.071811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.071828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.079634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.079667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.079679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.086009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.086034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.086045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.092366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.092391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.092403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.098784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.098809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.098821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.105125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.136 [2024-12-09 11:14:30.105149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.136 [2024-12-09 11:14:30.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:29.136 [2024-12-09 11:14:30.111339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.137 [2024-12-09 11:14:30.111363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.137 [2024-12-09 11:14:30.111374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:29.137 [2024-12-09 11:14:30.117536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.137 [2024-12-09 11:14:30.117560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.137 [2024-12-09 11:14:30.117573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:29.137 4849.00 IOPS, 606.12 MiB/s [2024-12-09T10:14:30.313Z] [2024-12-09 11:14:30.124914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cde710) 01:03:29.137 [2024-12-09 11:14:30.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:29.137 [2024-12-09 11:14:30.124951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:29.137 01:03:29.137 Latency(us) 01:03:29.137 [2024-12-09T10:14:30.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:29.137 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:03:29.137 nvme0n1 : 2.00 4848.30 606.04 0.00 0.00 3296.89 740.84 11283.59 01:03:29.137 [2024-12-09T10:14:30.313Z] =================================================================================================================== 01:03:29.137 [2024-12-09T10:14:30.313Z] Total : 4848.30 606.04 0.00 0.00 3296.89 740.84 11283.59 01:03:29.137 { 01:03:29.137 "results": [ 
01:03:29.137 { 01:03:29.137 "job": "nvme0n1", 01:03:29.137 "core_mask": "0x2", 01:03:29.137 "workload": "randread", 01:03:29.137 "status": "finished", 01:03:29.137 "queue_depth": 16, 01:03:29.137 "io_size": 131072, 01:03:29.137 "runtime": 2.003589, 01:03:29.137 "iops": 4848.299726141439, 01:03:29.137 "mibps": 606.0374657676799, 01:03:29.137 "io_failed": 0, 01:03:29.137 "io_timeout": 0, 01:03:29.137 "avg_latency_us": 3296.890973315072, 01:03:29.137 "min_latency_us": 740.8417391304348, 01:03:29.137 "max_latency_us": 11283.589565217391 01:03:29.137 } 01:03:29.137 ], 01:03:29.137 "core_count": 1 01:03:29.137 } 01:03:29.137 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:03:29.137 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:03:29.137 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:03:29.137 | .driver_specific 01:03:29.137 | .nvme_error 01:03:29.137 | .status_code 01:03:29.137 | .command_transient_transport_error' 01:03:29.137 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 )) 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2547144 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2547144 ']' 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2547144 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:03:29.397 11:14:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547144 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547144' 01:03:29.397 killing process with pid 2547144 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2547144 01:03:29.397 Received shutdown signal, test time was about 2.000000 seconds 01:03:29.397 01:03:29.397 Latency(us) 01:03:29.397 [2024-12-09T10:14:30.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:29.397 [2024-12-09T10:14:30.573Z] =================================================================================================================== 01:03:29.397 [2024-12-09T10:14:30.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:29.397 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2547144 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:03:29.656 
11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2547590 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2547590 /var/tmp/bperf.sock 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:03:29.656 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2547590 ']' 01:03:29.657 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:03:29.657 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:29.657 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:03:29.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:03:29.657 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:29.657 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:29.657 [2024-12-09 11:14:30.699347] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:03:29.657 [2024-12-09 11:14:30.699432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547590 ] 01:03:29.657 [2024-12-09 11:14:30.797641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:29.916 [2024-12-09 11:14:30.842531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:29.916 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:29.916 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:03:29.916 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:03:29.916 11:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:30.175 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:03:30.745 nvme0n1 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:03:30.745 11:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:03:30.745 Running I/O for 2 seconds... 
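For readers following the trace: `get_transient_errcount` works by querying `bdev_get_iostat` over the bperf RPC socket and walking the `driver_specific.nvme_error` stats with `jq` (the filter visible in the log above). A minimal Python sketch of the same extraction, using a hand-made illustrative payload instead of a live SPDK socket (the real JSON comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`):

```python
import json

# Hand-made sample payload mirroring the fields the test reads; a real run
# gets this from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 314
          }
        }
      }
    }
  ]
}
""")

# Same path the jq filter in the log walks:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#           | .command_transient_transport_error
errcount = (sample["bdevs"][0]
            ["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

print(errcount)
```

The test then asserts the count is positive — in this run, `(( 314 > 0 ))` — confirming that the injected crc32c corruption surfaced as transient transport errors.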
01:03:30.745 [2024-12-09 11:14:31.850375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efcdd0 01:03:30.745 [2024-12-09 11:14:31.851410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.851445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:30.745 [2024-12-09 11:14:31.863975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efd640 01:03:30.745 [2024-12-09 11:14:31.864970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.864995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:30.745 [2024-12-09 11:14:31.877649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efdeb0 01:03:30.745 [2024-12-09 11:14:31.878627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.878654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:03:30.745 [2024-12-09 11:14:31.891429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eff3c8 01:03:30.745 [2024-12-09 11:14:31.892378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.892401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:03:30.745 [2024-12-09 11:14:31.905209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efeb58 01:03:30.745 [2024-12-09 11:14:31.906136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.906158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:03:30.745 [2024-12-09 11:14:31.918948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ede038 01:03:30.745 [2024-12-09 11:14:31.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:30.745 [2024-12-09 11:14:31.919817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:31.938021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ede038 01:03:31.005 [2024-12-09 11:14:31.940489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:31.940513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:31.951801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efeb58 01:03:31.005 [2024-12-09 11:14:31.954259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:31.954283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:31.965608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eff3c8 01:03:31.005 [2024-12-09 11:14:31.968017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:31.968040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:31.979403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efdeb0 01:03:31.005 [2024-12-09 11:14:31.981806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:31.981829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:31.993177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efd640 01:03:31.005 [2024-12-09 11:14:31.995485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:31.995507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:32.006988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efcdd0 01:03:31.005 [2024-12-09 11:14:32.009342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:32.009365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:32.020773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efc560 01:03:31.005 [2024-12-09 11:14:32.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.005 [2024-12-09 11:14:32.023097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:03:31.005 [2024-12-09 11:14:32.034576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efbcf0 01:03:31.005 [2024-12-09 11:14:32.036789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.036811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.048397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efb480 01:03:31.006 [2024-12-09 11:14:32.050660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.050682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.062145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efac10 01:03:31.006 [2024-12-09 11:14:32.064391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 
[2024-12-09 11:14:32.064412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.075926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efa3a0 01:03:31.006 [2024-12-09 11:14:32.078126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.078148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.089672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef9b30 01:03:31.006 [2024-12-09 11:14:32.091810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.091833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.103418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef92c0 01:03:31.006 [2024-12-09 11:14:32.105602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.105624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.117197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef8a50 01:03:31.006 [2024-12-09 11:14:32.119253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7731 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.119275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.130964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef81e0 01:03:31.006 [2024-12-09 11:14:32.133075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.133098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.144740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef7970 01:03:31.006 [2024-12-09 11:14:32.146772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.146795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.158533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef7100 01:03:31.006 [2024-12-09 11:14:32.160559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.160581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:03:31.006 [2024-12-09 11:14:32.172290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef6890 01:03:31.006 [2024-12-09 11:14:32.174253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:77 nsid:1 lba:17720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.006 [2024-12-09 11:14:32.174275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.186050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef6020 01:03:31.267 [2024-12-09 11:14:32.187976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.187998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.200113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef57b0 01:03:31.267 [2024-12-09 11:14:32.202052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.202078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.213893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef4f40 01:03:31.267 [2024-12-09 11:14:32.215850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.215872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.227681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef46d0 01:03:31.267 [2024-12-09 11:14:32.229630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.229657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.241502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef3e60 01:03:31.267 [2024-12-09 11:14:32.243419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.243440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.255245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef35f0 01:03:31.267 [2024-12-09 11:14:32.257128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.257149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.268999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef2d80 01:03:31.267 [2024-12-09 11:14:32.270798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.270820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.282712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef2510 01:03:31.267 
[2024-12-09 11:14:32.284521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.284542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.296434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef1ca0 01:03:31.267 [2024-12-09 11:14:32.298223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.298244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.310188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef1430 01:03:31.267 [2024-12-09 11:14:32.311952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.311973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.323904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef0bc0 01:03:31.267 [2024-12-09 11:14:32.325641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.325665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.337636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) 
with pdu=0x200016ef0350 01:03:31.267 [2024-12-09 11:14:32.339322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.339344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.351378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eefae0 01:03:31.267 [2024-12-09 11:14:32.353075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.365078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eef270 01:03:31.267 [2024-12-09 11:14:32.366767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.366791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.378847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeea00 01:03:31.267 [2024-12-09 11:14:32.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.380511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.392616] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eee190 01:03:31.267 [2024-12-09 11:14:32.394238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.394259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.406399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eed920 01:03:31.267 [2024-12-09 11:14:32.407999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.408021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.420131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eed0b0 01:03:31.267 [2024-12-09 11:14:32.421688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.421710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:03:31.267 [2024-12-09 11:14:32.433854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eec840 01:03:31.267 [2024-12-09 11:14:32.435403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.267 [2024-12-09 11:14:32.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:03:31.528 [2024-12-09 11:14:32.447600] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eebfd0 01:03:31.528 [2024-12-09 11:14:32.449124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.528 [2024-12-09 11:14:32.449146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:03:31.528 [2024-12-09 11:14:32.461375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeb760 01:03:31.528 [2024-12-09 11:14:32.462816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.528 [2024-12-09 11:14:32.462837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:03:31.528 [2024-12-09 11:14:32.475110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeaef0 01:03:31.528 [2024-12-09 11:14:32.476579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.528 [2024-12-09 11:14:32.476601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:03:31.528 [2024-12-09 11:14:32.488868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eea680 01:03:31.528 [2024-12-09 11:14:32.490323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:31.528 [2024-12-09 11:14:32.490343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
01:03:31.528 [2024-12-09 11:14:32.502617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee9e10
01:03:31.528 [2024-12-09 11:14:32.504032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.504053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.516315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee95a0
01:03:31.528 [2024-12-09 11:14:32.517697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.517718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.530070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee8d30
01:03:31.528 [2024-12-09 11:14:32.531429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.531450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.543842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee84c0
01:03:31.528 [2024-12-09 11:14:32.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.545196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.557550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee7c50
01:03:31.528 [2024-12-09 11:14:32.558821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.558851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.571285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee73e0
01:03:31.528 [2024-12-09 11:14:32.572584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.572605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.585011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee6b70
01:03:31.528 [2024-12-09 11:14:32.586279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.586301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.598742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee6300
01:03:31.528 [2024-12-09 11:14:32.600002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.600023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.612505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee5a90
01:03:31.528 [2024-12-09 11:14:32.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.613739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.626238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee5220
01:03:31.528 [2024-12-09 11:14:32.627449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.627472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.639981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee49b0
01:03:31.528 [2024-12-09 11:14:32.641163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.641185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.653743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee4140
01:03:31.528 [2024-12-09 11:14:32.654886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.528 [2024-12-09 11:14:32.654907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0
01:03:31.528 [2024-12-09 11:14:32.667458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee38d0
01:03:31.529 [2024-12-09 11:14:32.668586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.529 [2024-12-09 11:14:32.668607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0
01:03:31.529 [2024-12-09 11:14:32.681218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee3060
01:03:31.529 [2024-12-09 11:14:32.682305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.529 [2024-12-09 11:14:32.682326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
01:03:31.529 [2024-12-09 11:14:32.694956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee27f0
01:03:31.529 [2024-12-09 11:14:32.696032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.529 [2024-12-09 11:14:32.696053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
01:03:31.803 [2024-12-09 11:14:32.708700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee1f80
01:03:31.803 [2024-12-09 11:14:32.709751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.803 [2024-12-09 11:14:32.709772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
01:03:31.803 [2024-12-09 11:14:32.722455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee1710
01:03:31.804 [2024-12-09 11:14:32.723469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.723491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.736188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee0ea0
01:03:31.804 [2024-12-09 11:14:32.737134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.737155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.749985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee0630
01:03:31.804 [2024-12-09 11:14:32.750948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.750970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.763737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edfdc0
01:03:31.804 [2024-12-09 11:14:32.764673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.764694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.777430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edf550
01:03:31.804 [2024-12-09 11:14:32.778371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.778393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.791178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edece0
01:03:31.804 [2024-12-09 11:14:32.792098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.792119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.810166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ede470
01:03:31.804 [2024-12-09 11:14:32.812628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.812654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.823899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edece0
01:03:31.804 [2024-12-09 11:14:32.826329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.826351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.837652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edf550
01:03:31.804 [2024-12-09 11:14:32.841010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.841031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
01:03:31.804 18345.00 IOPS, 71.66 MiB/s [2024-12-09T10:14:32.980Z] [2024-12-09 11:14:32.853247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016edfdc0
01:03:31.804 [2024-12-09 11:14:32.855617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.855640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.867025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee0630
01:03:31.804 [2024-12-09 11:14:32.869384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.869406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.880762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee0ea0
01:03:31.804 [2024-12-09 11:14:32.883100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.883123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.894461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee1710
01:03:31.804 [2024-12-09 11:14:32.896795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.896817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.908226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee1f80
01:03:31.804 [2024-12-09 11:14:32.910504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.910525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.921948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee27f0
01:03:31.804 [2024-12-09 11:14:32.924199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.924225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.935661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee3060
01:03:31.804 [2024-12-09 11:14:32.937844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.949396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee38d0
01:03:31.804 [2024-12-09 11:14:32.951616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:31.804 [2024-12-09 11:14:32.951637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:31.804 [2024-12-09 11:14:32.963111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee4140
01:03:32.064 [2024-12-09 11:14:32.965303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.064 [2024-12-09 11:14:32.965327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
01:03:32.064 [2024-12-09 11:14:32.976854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee49b0
01:03:32.064 [2024-12-09 11:14:32.979006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.064 [2024-12-09 11:14:32.979027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0
01:03:32.064 [2024-12-09 11:14:32.990600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee5220
01:03:32.064 [2024-12-09 11:14:32.992769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.064 [2024-12-09 11:14:32.992791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0
01:03:32.064 [2024-12-09 11:14:33.004311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee5a90
01:03:32.064 [2024-12-09 11:14:33.006428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.064 [2024-12-09 11:14:33.006449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0
01:03:32.064 [2024-12-09 11:14:33.018057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee6300
01:03:32.064 [2024-12-09 11:14:33.020157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.064 [2024-12-09 11:14:33.020179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.031812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee6b70
01:03:32.065 [2024-12-09 11:14:33.033821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.033843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.045550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee73e0
01:03:32.065 [2024-12-09 11:14:33.047620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.047641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.059366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee7c50
01:03:32.065 [2024-12-09 11:14:33.061401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.061423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.073122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee84c0
01:03:32.065 [2024-12-09 11:14:33.075057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.075079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.086840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee8d30
01:03:32.065 [2024-12-09 11:14:33.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.088833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.100616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee95a0
01:03:32.065 [2024-12-09 11:14:33.102580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.102602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.114335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ee9e10
01:03:32.065 [2024-12-09 11:14:33.116261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.116283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.128057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eea680
01:03:32.065 [2024-12-09 11:14:33.129946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.129967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.141775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeaef0
01:03:32.065 [2024-12-09 11:14:33.143567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.143589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.155446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeb760
01:03:32.065 [2024-12-09 11:14:33.157283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.169133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eebfd0
01:03:32.065 [2024-12-09 11:14:33.170943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.170965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.182785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eec840
01:03:32.065 [2024-12-09 11:14:33.184510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.184531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.196399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eed0b0
01:03:32.065 [2024-12-09 11:14:33.198220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.198241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.210397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eed920
01:03:32.065 [2024-12-09 11:14:33.212126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.212148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.224045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eee190
01:03:32.065 [2024-12-09 11:14:33.225716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.225737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0
01:03:32.065 [2024-12-09 11:14:33.237701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eeea00
01:03:32.065 [2024-12-09 11:14:33.239391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.065 [2024-12-09 11:14:33.239414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.251360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eef270
01:03:32.325 [2024-12-09 11:14:33.253035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.253058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.265004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eefae0
01:03:32.325 [2024-12-09 11:14:33.266656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.266679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.278693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef0350
01:03:32.325 [2024-12-09 11:14:33.280247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.280274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.292415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef0bc0
01:03:32.325 [2024-12-09 11:14:33.293949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.293971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.306056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef1430
01:03:32.325 [2024-12-09 11:14:33.307536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.307557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.319726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef1ca0
01:03:32.325 [2024-12-09 11:14:33.321181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.321203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.333394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef2510
01:03:32.325 [2024-12-09 11:14:33.334833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.334856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.347040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef2d80
01:03:32.325 [2024-12-09 11:14:33.348534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.348556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
01:03:32.325 [2024-12-09 11:14:33.360731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef35f0
01:03:32.325 [2024-12-09 11:14:33.362179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.325 [2024-12-09 11:14:33.362200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.374413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef3e60
01:03:32.326 [2024-12-09 11:14:33.375860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.375881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.388078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef46d0
01:03:32.326 [2024-12-09 11:14:33.389504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.389525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.401761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef4f40
01:03:32.326 [2024-12-09 11:14:33.403143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.403170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.415394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef57b0
01:03:32.326 [2024-12-09 11:14:33.416761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.416784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.429277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef6020
01:03:32.326 [2024-12-09 11:14:33.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.430585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.442989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef6890
01:03:32.326 [2024-12-09 11:14:33.444226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.444248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.456659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef7100
01:03:32.326 [2024-12-09 11:14:33.457873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.457894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.470310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef7970
01:03:32.326 [2024-12-09 11:14:33.471500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.471521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.483964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef81e0
01:03:32.326 [2024-12-09 11:14:33.485144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.485165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
01:03:32.326 [2024-12-09 11:14:33.497651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef8a50
01:03:32.326 [2024-12-09 11:14:33.498805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.326 [2024-12-09 11:14:33.498826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.511335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef92c0
01:03:32.586 [2024-12-09 11:14:33.512454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.512476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.524998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef9b30
01:03:32.586 [2024-12-09 11:14:33.526092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.526113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.538674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efa3a0
01:03:32.586 [2024-12-09 11:14:33.539758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.539779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.552339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efac10
01:03:32.586 [2024-12-09 11:14:33.553393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.553416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.565986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efb480
01:03:32.586 [2024-12-09 11:14:33.566998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.567019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.579662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efbcf0
01:03:32.586 [2024-12-09 11:14:33.580654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.580675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.593359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efc560
01:03:32.586 [2024-12-09 11:14:33.594322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.594343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.607010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efcdd0
01:03:32.586 [2024-12-09 11:14:33.607973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.607994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:32.586 [2024-12-09 11:14:33.620686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efd640
01:03:32.586 [2024-12-09 11:14:33.621586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.586 [2024-12-09 11:14:33.621607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 01:03:32.586 [2024-12-09 11:14:33.634315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efdeb0 01:03:32.586 [2024-12-09 11:14:33.635204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.586 [2024-12-09 11:14:33.635226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:03:32.586 [2024-12-09 11:14:33.647960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eff3c8 01:03:32.586 [2024-12-09 11:14:33.648816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.586 [2024-12-09 11:14:33.648837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:03:32.586 [2024-12-09 11:14:33.661616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efeb58 01:03:32.586 [2024-12-09 11:14:33.662534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.586 [2024-12-09 11:14:33.662557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.675252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ede038 01:03:32.587 [2024-12-09 11:14:33.676148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.676169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.694105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ede038 01:03:32.587 [2024-12-09 11:14:33.696548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.696569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.707730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efeb58 01:03:32.587 [2024-12-09 11:14:33.710153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.710174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.721359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016eff3c8 01:03:32.587 [2024-12-09 11:14:33.723760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.723781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.735011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efdeb0 01:03:32.587 [2024-12-09 11:14:33.737380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.737400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:03:32.587 [2024-12-09 11:14:33.748617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efd640 01:03:32.587 [2024-12-09 11:14:33.750959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.587 [2024-12-09 11:14:33.750980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.762274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efcdd0 01:03:32.847 [2024-12-09 11:14:33.764620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 [2024-12-09 11:14:33.764647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.775949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efc560 01:03:32.847 [2024-12-09 11:14:33.778274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 [2024-12-09 11:14:33.778296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.789572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efbcf0 01:03:32.847 [2024-12-09 11:14:33.791856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 
[2024-12-09 11:14:33.791877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.803230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efb480 01:03:32.847 [2024-12-09 11:14:33.805503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 [2024-12-09 11:14:33.805523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.816886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efac10 01:03:32.847 [2024-12-09 11:14:33.819114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 [2024-12-09 11:14:33.819135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:03:32.847 [2024-12-09 11:14:33.830499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016efa3a0 01:03:32.847 [2024-12-09 11:14:33.832699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:32.847 [2024-12-09 11:14:33.832719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:03:32.847 18470.50 IOPS, 72.15 MiB/s [2024-12-09T10:14:34.023Z] [2024-12-09 11:14:33.844137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa09f0) with pdu=0x200016ef9b30 01:03:32.847 [2024-12-09 11:14:33.846302] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:03:32.847 [2024-12-09 11:14:33.846322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0
01:03:32.847
01:03:32.847 Latency(us)
01:03:32.847 [2024-12-09T10:14:34.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:03:32.847 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
01:03:32.847 nvme0n1 : 2.01 18482.93 72.20 0.00 0.00 6919.51 2037.31 22567.18
01:03:32.847 [2024-12-09T10:14:34.023Z] ===================================================================================================================
01:03:32.847 [2024-12-09T10:14:34.023Z] Total : 18482.93 72.20 0.00 0.00 6919.51 2037.31 22567.18
01:03:32.847 {
01:03:32.847   "results": [
01:03:32.847     {
01:03:32.847       "job": "nvme0n1",
01:03:32.847       "core_mask": "0x2",
01:03:32.847       "workload": "randwrite",
01:03:32.847       "status": "finished",
01:03:32.847       "queue_depth": 128,
01:03:32.847       "io_size": 4096,
01:03:32.847       "runtime": 2.00558,
01:03:32.847       "iops": 18482.932617995793,
01:03:32.847       "mibps": 72.19895553904607,
01:03:32.847       "io_failed": 0,
01:03:32.847       "io_timeout": 0,
01:03:32.847       "avg_latency_us": 6919.507033722072,
01:03:32.847       "min_latency_us": 2037.3147826086956,
01:03:32.847       "max_latency_us": 22567.179130434783
01:03:32.847     }
01:03:32.847   ],
01:03:32.847   "core_count": 1
01:03:32.847 }
01:03:32.847 11:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:03:32.847 11:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:03:32.847 11:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:03:32.847 | .driver_specific
01:03:32.847 | .nvme_error
01:03:32.847 | .status_code
01:03:32.847 | .command_transient_transport_error'
01:03:32.847 11:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2547590
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2547590 ']'
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2547590
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547590
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547590'
01:03:33.107 killing process with pid 2547590
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2547590
01:03:33.107 Received shutdown signal, test time was about 2.000000 seconds
01:03:33.107
01:03:33.107 Latency(us)
01:03:33.107 [2024-12-09T10:14:34.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:03:33.107 [2024-12-09T10:14:34.283Z]
===================================================================================================================
01:03:33.107 [2024-12-09T10:14:34.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:03:33.107 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2547590
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2548052
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2548052 /var/tmp/bperf.sock
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2548052 ']'
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
01:03:33.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
01:03:33.367 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:03:33.367 [2024-12-09 11:14:34.487778] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
01:03:33.367 [2024-12-09 11:14:34.487853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548052 ]
01:03:33.367 I/O size of 131072 is greater than zero copy threshold (65536).
01:03:33.367 Zero copy mechanism will not be used.
01:03:33.627 [2024-12-09 11:14:34.582998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:03:33.627 [2024-12-09 11:14:34.624771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:03:33.627 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:03:33.627 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
01:03:33.628 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:03:33.628 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:03:33.887 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:03:33.887 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:03:33.887 11:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:03:33.887 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:03:33.887 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:03:33.887 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:03:34.461 nvme0n1
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:03:34.461 11:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
01:03:34.461 I/O size of 131072 is greater than zero copy threshold (65536).
01:03:34.461 Zero copy mechanism will not be used.
01:03:34.461 Running I/O for 2 seconds...
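Editor's note: the bdevperf summary JSON and the `get_transient_errcount` jq filter earlier in this log can be cross-checked with a short sketch. The numeric fields below are copied from the log; the surrounding `iostat` structure is a hypothetical stand-in inferred only from the jq path shown in the log (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`), not the full `bdev_get_iostat` output shape.

```python
# Summary record copied from the bdevperf "results" JSON earlier in the log.
result = {
    "job": "nvme0n1",
    "io_size": 4096,
    "runtime": 2.00558,
    "iops": 18482.932617995793,
    "mibps": 72.19895553904607,
}

# bdevperf's MiB/s column is IOPS * io_size / 2^20; this reproduces "mibps".
derived_mibps = result["iops"] * result["io_size"] / (1024 * 1024)
assert abs(derived_mibps - result["mibps"]) < 1e-6

# get_transient_errcount walks the same path as the jq filter in the log.
# This iostat dict is an illustrative stand-in seeded with the count (145)
# that the log's `(( 145 > 0 ))` check reports for the first run.
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {"command_transient_transport_error": 145}
                }
            },
        }
    ]
}
errcount = iostat["bdevs"][0]["driver_specific"]["nvme_error"][
    "status_code"]["command_transient_transport_error"]
print(errcount)  # the digest-error test passes when this count is > 0
```

The test deliberately injects CRC32C corruption (`accel_error_inject_error -o crc32c -t corrupt`), so every data-digest failure surfaces as a TRANSIENT TRANSPORT ERROR (00/22) completion, and a nonzero count is the pass condition.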
01:03:34.461 [2024-12-09 11:14:35.583913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.583997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.584027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.591093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.591179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.597097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.597177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.597200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.603082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.603148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.603169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.608957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.609037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.609061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.615940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.616011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.616032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.623848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.623926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.623948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:34.461 [2024-12-09 11:14:35.632425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.461 [2024-12-09 11:14:35.632528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.461 [2024-12-09 11:14:35.632549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:34.721 [2024-12-09 11:14:35.641208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.721 [2024-12-09 11:14:35.641319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.721 [2024-12-09 11:14:35.641341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:34.721 [2024-12-09 11:14:35.648843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.721 [2024-12-09 11:14:35.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.721 [2024-12-09 11:14:35.649023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:34.721 [2024-12-09 11:14:35.655752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.721 [2024-12-09 11:14:35.655832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.721 [2024-12-09 11:14:35.655853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:34.721 [2024-12-09 11:14:35.661662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.722 [2024-12-09 11:14:35.661755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.722 [2024-12-09 11:14:35.661776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:34.722 [2024-12-09 11:14:35.667465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.722 [2024-12-09 11:14:35.667559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.722 [2024-12-09 11:14:35.667580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:34.722 [2024-12-09 11:14:35.673618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.722 [2024-12-09 11:14:35.673730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.722 [2024-12-09 11:14:35.673752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:34.722 [2024-12-09 11:14:35.680163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.722 [2024-12-09 11:14:35.680267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.722 [2024-12-09 11:14:35.680288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:34.722 [2024-12-09 11:14:35.686734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:34.722 [2024-12-09 11:14:35.686846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:34.722 
[2024-12-09 11:14:35.686867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:34.722 [2024-12-09 11:14:35.692966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:34.722 [2024-12-09 11:14:35.693063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:34.722 [2024-12-09 11:14:35.693084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:34.722 [2024-12-09 11:14:35.698804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:34.722 [2024-12-09 11:14:35.698901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:34.722 [2024-12-09 11:14:35.698922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR triplet repeats roughly every 6 ms from 11:14:35.704 through 11:14:36.211 on tqpair=(0xaa0d30), pdu=0x200016eff3c8: qid:1, cid:0 (cid:1 from 11:14:36.197), nsid:1, len:32, sqhd cycling 0002/0022/0042/0062, with varying LBAs ...]
01:03:35.259 [2024-12-09 11:14:36.211820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.259 [2024-12-09 11:14:36.211951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.259 [2024-12-09 11:14:36.211972] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.259 [2024-12-09 11:14:36.219008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.259 [2024-12-09 11:14:36.219150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.259 [2024-12-09 11:14:36.219171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.259 [2024-12-09 11:14:36.227030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.259 [2024-12-09 11:14:36.227118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.259 [2024-12-09 11:14:36.227140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.259 [2024-12-09 11:14:36.234860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.259 [2024-12-09 11:14:36.234960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.259 [2024-12-09 11:14:36.234981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.259 [2024-12-09 11:14:36.243463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.259 [2024-12-09 11:14:36.243557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.259 
[2024-12-09 11:14:36.243578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.259 [2024-12-09 11:14:36.251216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.259 [2024-12-09 11:14:36.251338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.251359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.260022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.260133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.260155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.267025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.267107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.267128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.272964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.273037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.273057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.278922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.279004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.284895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.284965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.284986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.290849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.290909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.290929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.296825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.296891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.296911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.302707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.302773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.302794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.308673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.308738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.308758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.314531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.314595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.314615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.320449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.320527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.320548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.326434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.326509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.326529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.332374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.332449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.332470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.338366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.338444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.338465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.345448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 
01:03:35.260 [2024-12-09 11:14:36.345540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.345562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.353459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.353600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.353622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.360375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.360511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.366499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.366595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.366617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.372966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.373093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.373114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.380584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.380692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.380717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.388581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.388679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.388701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.397042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.397106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.397127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.403364] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.403433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.403454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.409263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.409329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.409349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.415178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.415247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.415267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.260 [2024-12-09 11:14:36.421219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.260 [2024-12-09 11:14:36.421293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.260 [2024-12-09 11:14:36.421313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:03:35.260 [2024-12-09 11:14:36.427232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.260 [2024-12-09 11:14:36.427308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.260 [2024-12-09 11:14:36.427328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.260 [2024-12-09 11:14:36.433205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.261 [2024-12-09 11:14:36.433284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.261 [2024-12-09 11:14:36.433304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.439184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.439257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.439278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.445118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.445181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.445202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.451079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.451152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.451174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.457373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.457452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.457473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.464032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.464104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.464124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.470678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.470748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.470768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.476993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.477057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.477077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.483047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.483139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.488991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.489055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.489075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.494869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.494939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.494959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.500784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.500854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.500874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.506691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.506773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.506793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.512625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.512717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.512737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.518508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.518578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.518598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.524460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.524539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.524560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.530298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.530374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.530394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.536335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.536413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.536434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.542787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.542866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.542890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.549056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.549123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.549144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.555352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.555428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.555448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.561275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.561338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.561358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.567099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.567163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.567184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.573385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.573454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.573475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.522 4765.00 IOPS, 595.62 MiB/s [2024-12-09T10:14:36.698Z]
01:03:35.522 [2024-12-09 11:14:36.580584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.580656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.580677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.586725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.586793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.522 [2024-12-09 11:14:36.586813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.522 [2024-12-09 11:14:36.592833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.522 [2024-12-09 11:14:36.592902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.523 [2024-12-09 11:14:36.592923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:03:35.523 [2024-12-09 11:14:36.599110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.523 [2024-12-09 11:14:36.599188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.523 [2024-12-09 11:14:36.599211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:03:35.523 [2024-12-09 11:14:36.605719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.523 [2024-12-09 11:14:36.605791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.523 [2024-12-09 11:14:36.605813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:03:35.523 [2024-12-09 11:14:36.611610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8
01:03:35.523 [2024-12-09 11:14:36.611696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:03:35.523 [2024-12-09 11:14:36.611717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:03:35.523 [2024-12-09 11:14:36.617789] tcp.c:2241:data_crc32_calc_done: *ERROR*:
Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.617859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.623948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.624017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.624038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.630470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.630543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.630564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.636613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.636695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.636716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 
11:14:36.642901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.642970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.642991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.649011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.649077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.649099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.655116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.655183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.655204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.661104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.661172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.661193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.666962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.667029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.667049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.673724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.673788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.673808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.680064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.680133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.686488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.686558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.686579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.523 [2024-12-09 11:14:36.692665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.523 [2024-12-09 11:14:36.692744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.523 [2024-12-09 11:14:36.692766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.698782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.698911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.698932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.704941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.705045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.705070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.711163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.711235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.711256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.717355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.717428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.717449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.723601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.723688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.723708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.729823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.729898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.729918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.736077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.736158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 
[2024-12-09 11:14:36.736179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.742421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.742500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.742521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.749260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.749346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.749367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.755861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.756022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.762636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.762724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.784 [2024-12-09 11:14:36.762745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.784 [2024-12-09 11:14:36.769285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.784 [2024-12-09 11:14:36.769366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.769386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.776218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.776294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.776315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.782886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.782964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.782985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.789624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.789701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.789721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.796382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.796451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.796471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.803356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.803426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.803447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.810115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.810182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.810203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.816753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.816886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.816906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.823520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.823592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.823612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.830418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.830521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.830542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.837665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.837752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.837773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.845227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 
[2024-12-09 11:14:36.845299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.845320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.852203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.852297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.852322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.861706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.861842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.861862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.869138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.869223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.869244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.876105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.876251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.876272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.883035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.883113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.883139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.889854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.890014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.890036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.896909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.897133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.897154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.904941] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.905176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.905200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.913965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.914106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.914128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.923369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.923440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.923461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.931433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.931518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.931540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
01:03:35.785 [2024-12-09 11:14:36.938411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.938512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.938533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.945337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.945409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.945430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:35.785 [2024-12-09 11:14:36.952299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:35.785 [2024-12-09 11:14:36.952404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:35.785 [2024-12-09 11:14:36.952430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.959282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.959430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.959452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.966869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.967090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.967114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.974727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.974935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.974957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.982743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.982943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.982964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.990508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.990732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.990753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:36.998534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:36.998669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:36.998690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.006843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.006957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.006978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.014667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.014773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.014794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.022414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.022677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.022700] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.030332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.030578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.030601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.038546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.038663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.038685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.046655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.046879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.046900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.054255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.054485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:03:36.063 [2024-12-09 11:14:37.054509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.061836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.062068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.062091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.068914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.063 [2024-12-09 11:14:37.069073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.063 [2024-12-09 11:14:37.076235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.063 [2024-12-09 11:14:37.076435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.083892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.084112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.084133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.090997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.091149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.091169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.097707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.097786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.097806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.104404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.104519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.111133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.111238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.111259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.118027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.118129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.126550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.126682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.126703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.133426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.133534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.133555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.140183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.140275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.140296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.147560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.147687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.156331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.156430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.156452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.164969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.165146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.165167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.173142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with 
pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.173214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.173235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.180378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.180537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.180557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.187965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.188072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.188093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.195112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.195196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.195217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.202237] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.202315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.202337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.209157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.209226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.209247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.216383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.216460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.216481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 11:14:37.223497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.223572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.223593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.064 [2024-12-09 
11:14:37.231330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.064 [2024-12-09 11:14:37.231409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.064 [2024-12-09 11:14:37.231430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.239110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.239179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.239200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.248055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.248170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.248192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.256042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.256114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.256136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.263830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.263921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.263942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.272087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.272176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.272196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.280722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.280868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.280889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.288958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.289028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.289049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.296447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.296583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.296604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.303663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.303786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.303806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.310642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.310720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.310740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.317588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.317671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.317692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.325 [2024-12-09 11:14:37.324835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.325 [2024-12-09 11:14:37.324905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.325 [2024-12-09 11:14:37.324925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.332404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.332479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.332500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.339338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.339449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.339470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.346339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 
[2024-12-09 11:14:37.346434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.353847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.354016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.354037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.361201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.361273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.361294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.368754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.368826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.368850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.375702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.375773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.375794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.383059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.383140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.383161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.390237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.390360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.390381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.397080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.397159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.397182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.403972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.404133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.404154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.411912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.412019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.419672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.419743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.419764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.426779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.426854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.426875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.433637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.433726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.433747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.440354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.440427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.440447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.447382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.447523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.447544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.454359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.454461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.454482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.461735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 
01:03:36.326 [2024-12-09 11:14:37.461803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.461824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.468708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.468829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.468850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.476114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.476182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.476203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.483325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.483406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.483427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.491237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.491303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.491323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.326 [2024-12-09 11:14:37.498372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.326 [2024-12-09 11:14:37.498521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.326 [2024-12-09 11:14:37.498542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.586 [2024-12-09 11:14:37.505602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.586 [2024-12-09 11:14:37.505687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.586 [2024-12-09 11:14:37.505709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.586 [2024-12-09 11:14:37.512820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.586 [2024-12-09 11:14:37.512948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.512969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.520373] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.520446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.520466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.527512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.527580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.527600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.535243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.535314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.535340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.542194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.542258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.542279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 01:03:36.587 [2024-12-09 11:14:37.549008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.549142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.549163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.556016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.556090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.556110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.562903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.562981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.563003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.569850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.569921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.569941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:03:36.587 [2024-12-09 11:14:37.576912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaa0d30) with pdu=0x200016eff3c8 01:03:36.587 [2024-12-09 11:14:37.576983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:36.587 [2024-12-09 11:14:37.577003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:03:36.587 4548.50 IOPS, 568.56 MiB/s 01:03:36.587 Latency(us) 01:03:36.587 [2024-12-09T10:14:37.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:36.587 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:03:36.587 nvme0n1 : 2.00 4546.93 568.37 0.00 0.00 3513.74 2407.74 12366.36 01:03:36.587 [2024-12-09T10:14:37.763Z] =================================================================================================================== 01:03:36.587 [2024-12-09T10:14:37.763Z] Total : 4546.93 568.37 0.00 0.00 3513.74 2407.74 12366.36 01:03:36.587 { 01:03:36.587 "results": [ 01:03:36.587 { 01:03:36.587 "job": "nvme0n1", 01:03:36.587 "core_mask": "0x2", 01:03:36.587 "workload": "randwrite", 01:03:36.587 "status": "finished", 01:03:36.587 "queue_depth": 16, 01:03:36.587 "io_size": 131072, 01:03:36.587 "runtime": 2.004428, 01:03:36.587 "iops": 4546.933090138434, 01:03:36.587 "mibps": 568.3666362673042, 01:03:36.587 "io_failed": 0, 01:03:36.587 "io_timeout": 0, 01:03:36.587 "avg_latency_us": 3513.73889229184, 01:03:36.587 "min_latency_us": 2407.735652173913, 01:03:36.587 "max_latency_us": 12366.358260869565 01:03:36.587 } 01:03:36.587 ], 01:03:36.587 "core_count": 1 01:03:36.587 } 01:03:36.587 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:03:36.587 11:14:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:03:36.587 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:03:36.587 | .driver_specific 01:03:36.587 | .nvme_error 01:03:36.587 | .status_code 01:03:36.587 | .command_transient_transport_error' 01:03:36.587 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:03:36.846 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 294 > 0 )) 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2548052 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2548052 ']' 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2548052 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548052 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548052' 01:03:36.847 killing process with pid 2548052 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 2548052 01:03:36.847 Received shutdown signal, test time was about 2.000000 seconds 01:03:36.847 01:03:36.847 Latency(us) 01:03:36.847 [2024-12-09T10:14:38.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:36.847 [2024-12-09T10:14:38.023Z] =================================================================================================================== 01:03:36.847 [2024-12-09T10:14:38.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:36.847 11:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2548052 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2546589 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2546589 ']' 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2546589 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546589 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546589' 01:03:37.106 killing process with pid 2546589 01:03:37.106 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2546589 01:03:37.106 11:14:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2546589 01:03:37.365 01:03:37.365 real 0m15.709s 01:03:37.365 user 0m30.916s 01:03:37.365 sys 0m5.232s 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:03:37.365 ************************************ 01:03:37.365 END TEST nvmf_digest_error 01:03:37.365 ************************************ 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:37.365 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:37.365 rmmod nvme_tcp 01:03:37.365 rmmod nvme_fabrics 01:03:37.624 rmmod nvme_keyring 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2546589 ']' 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2546589 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 
-- # '[' -z 2546589 ']' 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2546589 01:03:37.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2546589) - No such process 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2546589 is not found' 01:03:37.624 Process with pid 2546589 is not found 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:37.624 11:14:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:03:39.549 01:03:39.549 real 0m40.755s 01:03:39.549 user 1m4.929s 01:03:39.549 sys 0m15.326s 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:03:39.549 ************************************ 01:03:39.549 END TEST nvmf_digest 01:03:39.549 ************************************ 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:39.549 11:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:39.810 ************************************ 01:03:39.810 START TEST nvmf_bdevperf 01:03:39.810 ************************************ 01:03:39.810 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 01:03:39.810 * Looking for test storage... 
01:03:39.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:03:39.810 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:39.810 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 01:03:39.810 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:39.810 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:39.811 --rc genhtml_branch_coverage=1 01:03:39.811 --rc genhtml_function_coverage=1 01:03:39.811 --rc genhtml_legend=1 01:03:39.811 --rc geninfo_all_blocks=1 01:03:39.811 --rc geninfo_unexecuted_blocks=1 01:03:39.811 01:03:39.811 ' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 01:03:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:39.811 --rc genhtml_branch_coverage=1 01:03:39.811 --rc genhtml_function_coverage=1 01:03:39.811 --rc genhtml_legend=1 01:03:39.811 --rc geninfo_all_blocks=1 01:03:39.811 --rc geninfo_unexecuted_blocks=1 01:03:39.811 01:03:39.811 ' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:39.811 --rc genhtml_branch_coverage=1 01:03:39.811 --rc genhtml_function_coverage=1 01:03:39.811 --rc genhtml_legend=1 01:03:39.811 --rc geninfo_all_blocks=1 01:03:39.811 --rc geninfo_unexecuted_blocks=1 01:03:39.811 01:03:39.811 ' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:39.811 --rc genhtml_branch_coverage=1 01:03:39.811 --rc genhtml_function_coverage=1 01:03:39.811 --rc genhtml_legend=1 01:03:39.811 --rc geninfo_all_blocks=1 01:03:39.811 --rc geninfo_unexecuted_blocks=1 01:03:39.811 01:03:39.811 ' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:39.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:39.811 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:03:39.812 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:03:39.812 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 01:03:39.812 11:14:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.938 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:03:47.938 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 01:03:47.938 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:03:47.939 11:14:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:03:47.939 Found 0000:af:00.0 (0x8086 - 0x159b) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:03:47.939 
11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:03:47.939 Found 0000:af:00.1 (0x8086 - 0x159b) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:03:47.939 Found net devices under 0000:af:00.0: cvl_0_0 01:03:47.939 11:14:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:03:47.939 Found net devices under 0000:af:00.1: cvl_0_1 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:03:47.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:47.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 01:03:47.939 01:03:47.939 --- 10.0.0.2 ping statistics --- 01:03:47.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:47.939 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:03:47.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:47.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 01:03:47.939 01:03:47.939 --- 10.0.0.1 ping statistics --- 01:03:47.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:47.939 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 01:03:47.939 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2551837 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2551837 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2551837 ']' 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:47.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:47.940 11:14:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 [2024-12-09 11:14:48.025625] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:03:47.940 [2024-12-09 11:14:48.025731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:47.940 [2024-12-09 11:14:48.130387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:47.940 [2024-12-09 11:14:48.176496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:47.940 [2024-12-09 11:14:48.176544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:47.940 [2024-12-09 11:14:48.176555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:47.940 [2024-12-09 11:14:48.176564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:47.940 [2024-12-09 11:14:48.176571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:03:47.940 [2024-12-09 11:14:48.177963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:47.940 [2024-12-09 11:14:48.178054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:47.940 [2024-12-09 11:14:48.178056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 [2024-12-09 11:14:48.325873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 Malloc0 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:47.940 [2024-12-09 11:14:48.378885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 01:03:47.940 
11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:47.940 { 01:03:47.940 "params": { 01:03:47.940 "name": "Nvme$subsystem", 01:03:47.940 "trtype": "$TEST_TRANSPORT", 01:03:47.940 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:47.940 "adrfam": "ipv4", 01:03:47.940 "trsvcid": "$NVMF_PORT", 01:03:47.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:47.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:47.940 "hdgst": ${hdgst:-false}, 01:03:47.940 "ddgst": ${ddgst:-false} 01:03:47.940 }, 01:03:47.940 "method": "bdev_nvme_attach_controller" 01:03:47.940 } 01:03:47.940 EOF 01:03:47.940 )") 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 01:03:47.940 11:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:47.940 "params": { 01:03:47.940 "name": "Nvme1", 01:03:47.940 "trtype": "tcp", 01:03:47.940 "traddr": "10.0.0.2", 01:03:47.940 "adrfam": "ipv4", 01:03:47.940 "trsvcid": "4420", 01:03:47.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:47.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:47.940 "hdgst": false, 01:03:47.940 "ddgst": false 01:03:47.940 }, 01:03:47.940 "method": "bdev_nvme_attach_controller" 01:03:47.940 }' 01:03:47.940 [2024-12-09 11:14:48.434865] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:03:47.940 [2024-12-09 11:14:48.434923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551869 ] 01:03:47.940 [2024-12-09 11:14:48.545938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:47.940 [2024-12-09 11:14:48.597433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:47.940 Running I/O for 1 seconds... 01:03:48.879 10466.00 IOPS, 40.88 MiB/s 01:03:48.879 Latency(us) 01:03:48.879 [2024-12-09T10:14:50.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:48.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:03:48.879 Verification LBA range: start 0x0 length 0x4000 01:03:48.879 Nvme1n1 : 1.01 10465.32 40.88 0.00 0.00 12165.94 2820.90 10827.69 01:03:48.879 [2024-12-09T10:14:50.055Z] =================================================================================================================== 01:03:48.879 [2024-12-09T10:14:50.055Z] Total : 10465.32 40.88 0.00 0.00 12165.94 2820.90 10827.69 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2552127 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:48.879 { 01:03:48.879 "params": { 01:03:48.879 "name": "Nvme$subsystem", 01:03:48.879 "trtype": "$TEST_TRANSPORT", 01:03:48.879 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:48.879 "adrfam": "ipv4", 01:03:48.879 "trsvcid": "$NVMF_PORT", 01:03:48.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:48.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:48.879 "hdgst": ${hdgst:-false}, 01:03:48.879 "ddgst": ${ddgst:-false} 01:03:48.879 }, 01:03:48.879 "method": "bdev_nvme_attach_controller" 01:03:48.879 } 01:03:48.879 EOF 01:03:48.879 )") 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 01:03:48.879 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 01:03:49.139 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 01:03:49.139 11:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:49.139 "params": { 01:03:49.139 "name": "Nvme1", 01:03:49.139 "trtype": "tcp", 01:03:49.139 "traddr": "10.0.0.2", 01:03:49.139 "adrfam": "ipv4", 01:03:49.139 "trsvcid": "4420", 01:03:49.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:49.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:49.139 "hdgst": false, 01:03:49.139 "ddgst": false 01:03:49.139 }, 01:03:49.139 "method": "bdev_nvme_attach_controller" 01:03:49.139 }' 01:03:49.139 [2024-12-09 11:14:50.084750] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:03:49.139 [2024-12-09 11:14:50.084828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552127 ] 01:03:49.139 [2024-12-09 11:14:50.203103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:49.139 [2024-12-09 11:14:50.256818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:49.399 Running I/O for 15 seconds... 01:03:51.715 10161.00 IOPS, 39.69 MiB/s [2024-12-09T10:14:53.153Z] 10498.50 IOPS, 41.01 MiB/s [2024-12-09T10:14:53.153Z] 11:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2551837 01:03:51.977 11:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 01:03:51.977 [2024-12-09 11:14:53.055473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.055968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.055984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:03:51.977 [2024-12-09 11:14:53.056002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 
11:14:53.056554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.977 [2024-12-09 11:14:53.056823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.977 [2024-12-09 11:14:53.056838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.056855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.056870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.056887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.056902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.056919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.056936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.056953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.056984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 
11:14:53.057480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.978 [2024-12-09 11:14:53.057778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.978 [2024-12-09 11:14:53.057859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.057972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.057989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.058021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.058036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.058053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.058068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.058085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.058100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.058117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.978 [2024-12-09 11:14:53.058131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.978 [2024-12-09 11:14:53.058149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 
11:14:53.058420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.979 [2024-12-09 11:14:53.058760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.058976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.058990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:03:51.979 [2024-12-09 11:14:53.059342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.979 [2024-12-09 11:14:53.059440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.979 [2024-12-09 11:14:53.059458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:03:51.980 [2024-12-09 11:14:53.059536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:51.980 [2024-12-09 11:14:53.059831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:51.980 [2024-12-09 11:14:53.059848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97590 is same with the state(6) to be set 01:03:51.980 [2024-12-09 11:14:53.059865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:03:51.980 [2024-12-09 11:14:53.059877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:03:51.980 [2024-12-09 11:14:53.059891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:82384 len:8 PRP1 0x0 PRP2 0x0 
01:03:51.980 [2024-12-09 11:14:53.059906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.980 [2024-12-09 11:14:53.060007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
01:03:51.980 [2024-12-09 11:14:53.060027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.980 [2024-12-09 11:14:53.060043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
01:03:51.980 [2024-12-09 11:14:53.060059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.980 [2024-12-09 11:14:53.060075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
01:03:51.980 [2024-12-09 11:14:53.060089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.980 [2024-12-09 11:14:53.060107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
01:03:51.980 [2024-12-09 11:14:53.060123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:03:51.980 [2024-12-09 11:14:53.060137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 
01:03:51.980 [2024-12-09 11:14:53.064235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
01:03:51.980 [2024-12-09 11:14:53.064270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 
01:03:51.980 [2024-12-09 11:14:53.064996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
01:03:51.980 [2024-12-09 11:14:53.065024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 
01:03:51.980 [2024-12-09 11:14:53.065039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 
01:03:51.980 [2024-12-09 11:14:53.065299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 
01:03:51.980 [2024-12-09 11:14:53.065561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
01:03:51.980 [2024-12-09 11:14:53.065578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
01:03:51.980 [2024-12-09 11:14:53.065594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
01:03:51.980 [2024-12-09 11:14:53.065608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:51.980 [2024-12-09 11:14:53.078959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
01:03:51.980 [2024-12-09 11:14:53.079422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
01:03:51.980 [2024-12-09 11:14:53.079451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 
01:03:51.980 [2024-12-09 11:14:53.079469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 
01:03:51.980 [2024-12-09 11:14:53.079740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 
01:03:51.980 [2024-12-09 11:14:53.080003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
01:03:51.980 [2024-12-09 11:14:53.080021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
01:03:51.980 [2024-12-09 11:14:53.080036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
01:03:51.980 [2024-12-09 11:14:53.080051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:51.980 [2024-12-09 11:14:53.093583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:51.980 [2024-12-09 11:14:53.094081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:51.980 [2024-12-09 11:14:53.094110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:51.980 [2024-12-09 11:14:53.094126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:51.980 [2024-12-09 11:14:53.094387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:51.980 [2024-12-09 11:14:53.094657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:51.980 [2024-12-09 11:14:53.094675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:51.980 [2024-12-09 11:14:53.094695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:51.980 [2024-12-09 11:14:53.094710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:51.980 [2024-12-09 11:14:53.107990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:51.980 [2024-12-09 11:14:53.108511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:51.980 [2024-12-09 11:14:53.108539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:51.980 [2024-12-09 11:14:53.108555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:51.980 [2024-12-09 11:14:53.108825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:51.980 [2024-12-09 11:14:53.109088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:51.980 [2024-12-09 11:14:53.109106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:51.980 [2024-12-09 11:14:53.109121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:51.980 [2024-12-09 11:14:53.109135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:51.980 [2024-12-09 11:14:53.122407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:51.980 [2024-12-09 11:14:53.122888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:51.980 [2024-12-09 11:14:53.122945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:51.980 [2024-12-09 11:14:53.122980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:51.980 [2024-12-09 11:14:53.123466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:51.980 [2024-12-09 11:14:53.123738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:51.980 [2024-12-09 11:14:53.123756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:51.980 [2024-12-09 11:14:53.123772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:51.980 [2024-12-09 11:14:53.123786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:51.980 [2024-12-09 11:14:53.136823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:51.980 [2024-12-09 11:14:53.137350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:51.980 [2024-12-09 11:14:53.137406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:51.981 [2024-12-09 11:14:53.137442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:51.981 [2024-12-09 11:14:53.138054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:51.981 [2024-12-09 11:14:53.138473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:51.981 [2024-12-09 11:14:53.138491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:51.981 [2024-12-09 11:14:53.138506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:51.981 [2024-12-09 11:14:53.138520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.151313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.151814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.151841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.151858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.242 [2024-12-09 11:14:53.152119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.242 [2024-12-09 11:14:53.152381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.242 [2024-12-09 11:14:53.152399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.242 [2024-12-09 11:14:53.152414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.242 [2024-12-09 11:14:53.152428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.165733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.166182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.166239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.166274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.242 [2024-12-09 11:14:53.166886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.242 [2024-12-09 11:14:53.167480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.242 [2024-12-09 11:14:53.167506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.242 [2024-12-09 11:14:53.167528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.242 [2024-12-09 11:14:53.167549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.180897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.181412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.181440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.181455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.242 [2024-12-09 11:14:53.181723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.242 [2024-12-09 11:14:53.181985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.242 [2024-12-09 11:14:53.182004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.242 [2024-12-09 11:14:53.182019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.242 [2024-12-09 11:14:53.182033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.195313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.195833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.195860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.195880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.242 [2024-12-09 11:14:53.196141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.242 [2024-12-09 11:14:53.196402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.242 [2024-12-09 11:14:53.196420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.242 [2024-12-09 11:14:53.196436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.242 [2024-12-09 11:14:53.196450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.209824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.210347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.210376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.210392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.242 [2024-12-09 11:14:53.210661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.242 [2024-12-09 11:14:53.210924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.242 [2024-12-09 11:14:53.210942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.242 [2024-12-09 11:14:53.210957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.242 [2024-12-09 11:14:53.210972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.242 [2024-12-09 11:14:53.224251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.242 [2024-12-09 11:14:53.224798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.242 [2024-12-09 11:14:53.224826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.242 [2024-12-09 11:14:53.224843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.225104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.225366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.225384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.225399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.225413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.238679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.239221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.239277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.239312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.239841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.240109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.240128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.240143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.240157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.253188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.253708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.253736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.253752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.254012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.254275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.254294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.254309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.254323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.267633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.268175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.268231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.268265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.268788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.269054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.269073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.269088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.269103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.282145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.282661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.282689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.282705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.282964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.283226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.283244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.283264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.283278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.296554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.297073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.297101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.297117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.297378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.297642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.297671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.297686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.297701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.310960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.311481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.311510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.311527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.311798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.312062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.312081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.312096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.312111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.325412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.325862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.325891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.325907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.326166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.326430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.326450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.326465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.326479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.340025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.340505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.340560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.340597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.341091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.341355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.341374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.341391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.341407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.354461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.354983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.355040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.243 [2024-12-09 11:14:53.355076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.243 [2024-12-09 11:14:53.355685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.243 [2024-12-09 11:14:53.356145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.243 [2024-12-09 11:14:53.356165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.243 [2024-12-09 11:14:53.356180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.243 [2024-12-09 11:14:53.356193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.243 [2024-12-09 11:14:53.369016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.243 [2024-12-09 11:14:53.369511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.243 [2024-12-09 11:14:53.369539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.244 [2024-12-09 11:14:53.369555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.244 [2024-12-09 11:14:53.369826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.244 [2024-12-09 11:14:53.370092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.244 [2024-12-09 11:14:53.370111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.244 [2024-12-09 11:14:53.370126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.244 [2024-12-09 11:14:53.370140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.244 [2024-12-09 11:14:53.383450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.244 [2024-12-09 11:14:53.383971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.244 [2024-12-09 11:14:53.384000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.244 [2024-12-09 11:14:53.384021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.244 [2024-12-09 11:14:53.384283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.244 [2024-12-09 11:14:53.384546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.244 [2024-12-09 11:14:53.384565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.244 [2024-12-09 11:14:53.384581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.244 [2024-12-09 11:14:53.384596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.244 [2024-12-09 11:14:53.397951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.244 [2024-12-09 11:14:53.398474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.244 [2024-12-09 11:14:53.398502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.244 [2024-12-09 11:14:53.398518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.244 [2024-12-09 11:14:53.398790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.244 [2024-12-09 11:14:53.399054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.244 [2024-12-09 11:14:53.399073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.244 [2024-12-09 11:14:53.399090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.244 [2024-12-09 11:14:53.399104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.244 [2024-12-09 11:14:53.412387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.244 [2024-12-09 11:14:53.412914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.244 [2024-12-09 11:14:53.412942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.244 [2024-12-09 11:14:53.412958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.244 [2024-12-09 11:14:53.413219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.244 [2024-12-09 11:14:53.413483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.244 [2024-12-09 11:14:53.413502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.244 [2024-12-09 11:14:53.413517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.244 [2024-12-09 11:14:53.413532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 [2024-12-09 11:14:53.426838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.505 [2024-12-09 11:14:53.427367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.505 [2024-12-09 11:14:53.427395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.505 [2024-12-09 11:14:53.427411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.505 [2024-12-09 11:14:53.427682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.505 [2024-12-09 11:14:53.427951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.505 [2024-12-09 11:14:53.427969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.505 [2024-12-09 11:14:53.427984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.505 [2024-12-09 11:14:53.427998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 [2024-12-09 11:14:53.441315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.505 [2024-12-09 11:14:53.441854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.505 [2024-12-09 11:14:53.441912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.505 [2024-12-09 11:14:53.441948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.505 [2024-12-09 11:14:53.442542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.505 [2024-12-09 11:14:53.443124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.505 [2024-12-09 11:14:53.443144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.505 [2024-12-09 11:14:53.443159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.505 [2024-12-09 11:14:53.443173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 [2024-12-09 11:14:53.455738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.505 [2024-12-09 11:14:53.456206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.505 [2024-12-09 11:14:53.456235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.505 [2024-12-09 11:14:53.456251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.505 [2024-12-09 11:14:53.456513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.505 [2024-12-09 11:14:53.456786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.505 [2024-12-09 11:14:53.456806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.505 [2024-12-09 11:14:53.456822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.505 [2024-12-09 11:14:53.456836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 [2024-12-09 11:14:53.470125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.505 [2024-12-09 11:14:53.470643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.505 [2024-12-09 11:14:53.470716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.505 [2024-12-09 11:14:53.470752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.505 [2024-12-09 11:14:53.471219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.505 [2024-12-09 11:14:53.471482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.505 [2024-12-09 11:14:53.471501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.505 [2024-12-09 11:14:53.471528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.505 [2024-12-09 11:14:53.471542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 [2024-12-09 11:14:53.484616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.505 [2024-12-09 11:14:53.485086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.505 [2024-12-09 11:14:53.485115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.505 [2024-12-09 11:14:53.485131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.505 [2024-12-09 11:14:53.485393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.505 [2024-12-09 11:14:53.485665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.505 [2024-12-09 11:14:53.485684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.505 [2024-12-09 11:14:53.485701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.505 [2024-12-09 11:14:53.485716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.505 8883.00 IOPS, 34.70 MiB/s [2024-12-09T10:14:53.681Z] [2024-12-09 11:14:53.499218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.499711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.499741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.499758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.500020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.500284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.500303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.500319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.500334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.513665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.514226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.514283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.514318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.514930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.515390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.515409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.515424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.515438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.528241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.528773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.528844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.528879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.529319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.529581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.529600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.529615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.529629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.542670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.543126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.543154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.543170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.543429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.543699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.543718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.543735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.543749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.557255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.557783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.557810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.557826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.558085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.558345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.558364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.558379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.558393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.571687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.572168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.572224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.572269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.572883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.573416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.573436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.573452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.573467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.586277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.586744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.586772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.586788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.587048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.587312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.587330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.587345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.587360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.600903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.601431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.601459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.601475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.601747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.602011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.602030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.602046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.602060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.615335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.615790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.615818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.615834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.616095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.616362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.616381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.616396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.616411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.629922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.630448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.630509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.506 [2024-12-09 11:14:53.630545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.506 [2024-12-09 11:14:53.631108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.506 [2024-12-09 11:14:53.631510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.506 [2024-12-09 11:14:53.631536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.506 [2024-12-09 11:14:53.631559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.506 [2024-12-09 11:14:53.631579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.506 [2024-12-09 11:14:53.644831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.506 [2024-12-09 11:14:53.645367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.506 [2024-12-09 11:14:53.645425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.507 [2024-12-09 11:14:53.645460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.507 [2024-12-09 11:14:53.646071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.507 [2024-12-09 11:14:53.646565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.507 [2024-12-09 11:14:53.646584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.507 [2024-12-09 11:14:53.646600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.507 [2024-12-09 11:14:53.646614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.507 [2024-12-09 11:14:53.659392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.507 [2024-12-09 11:14:53.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.507 [2024-12-09 11:14:53.659988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.507 [2024-12-09 11:14:53.660023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.507 [2024-12-09 11:14:53.660562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.507 [2024-12-09 11:14:53.660835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.507 [2024-12-09 11:14:53.660855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.507 [2024-12-09 11:14:53.660870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.507 [2024-12-09 11:14:53.660888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.507 [2024-12-09 11:14:53.673921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.507 [2024-12-09 11:14:53.674431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.507 [2024-12-09 11:14:53.674461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.507 [2024-12-09 11:14:53.674477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.507 [2024-12-09 11:14:53.674751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.507 [2024-12-09 11:14:53.675014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.507 [2024-12-09 11:14:53.675034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.507 [2024-12-09 11:14:53.675049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.507 [2024-12-09 11:14:53.675064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.688359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.688905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.688962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.688998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.689478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.689752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.689771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.689787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.689802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.702847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.703357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.703413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.703448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.703907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.704172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.704190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.704205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.704220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.717253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.717790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.717846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.717881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.718475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.718939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.718958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.718974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.718988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.731775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.732303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.732331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.732346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.732606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.732880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.732900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.732915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.732929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.746203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.746735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.746764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.746781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.747042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.747306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.747325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.747340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.747355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.760625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.761108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.761166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.761209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.761695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.761959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.761978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.761993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.762008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.775043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:52.769 [2024-12-09 11:14:53.775571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:52.769 [2024-12-09 11:14:53.775598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:52.769 [2024-12-09 11:14:53.775614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:52.769 [2024-12-09 11:14:53.775885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:52.769 [2024-12-09 11:14:53.776148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:52.769 [2024-12-09 11:14:53.776166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:52.769 [2024-12-09 11:14:53.776181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:52.769 [2024-12-09 11:14:53.776195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:52.769 [2024-12-09 11:14:53.789469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.769 [2024-12-09 11:14:53.789997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.769 [2024-12-09 11:14:53.790025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.769 [2024-12-09 11:14:53.790041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.769 [2024-12-09 11:14:53.790301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.769 [2024-12-09 11:14:53.790565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.769 [2024-12-09 11:14:53.790583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.769 [2024-12-09 11:14:53.790598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.769 [2024-12-09 11:14:53.790612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.769 [2024-12-09 11:14:53.803905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.769 [2024-12-09 11:14:53.804315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.769 [2024-12-09 11:14:53.804343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.804359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.804619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.804894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.804917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.804933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.804947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.818453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.818998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.819054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.819089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.819701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.820179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.820198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.820214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.820228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.833014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.833493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.833549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.833585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.834121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.834386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.834406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.834422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.834437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.847497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.848027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.848056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.848072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.848332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.848596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.848615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.848631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.848660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.861951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.862460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.862488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.862504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.862774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.863038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.863057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.863072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.863086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.876364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.876939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.876973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.877391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.877663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.877683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.877699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.877713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.890980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.891450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.891506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.891541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.892033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.892296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.892314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.892330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.892344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.905378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.905914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.905942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.905959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.906219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.906482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.906501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.906516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.906530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.919805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.920343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.920399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.920435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.921047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.921510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.921528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.921543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.921558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:52.770 [2024-12-09 11:14:53.934341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:52.770 [2024-12-09 11:14:53.934888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:52.770 [2024-12-09 11:14:53.934946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:52.770 [2024-12-09 11:14:53.934980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:52.770 [2024-12-09 11:14:53.935501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:52.770 [2024-12-09 11:14:53.935775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:52.770 [2024-12-09 11:14:53.935794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:52.770 [2024-12-09 11:14:53.935810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:52.770 [2024-12-09 11:14:53.935824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.031 [2024-12-09 11:14:53.948884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.031 [2024-12-09 11:14:53.949430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.031 [2024-12-09 11:14:53.949486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.031 [2024-12-09 11:14:53.949522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.031 [2024-12-09 11:14:53.950142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.031 [2024-12-09 11:14:53.950609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.031 [2024-12-09 11:14:53.950628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.031 [2024-12-09 11:14:53.950652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.031 [2024-12-09 11:14:53.950667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:53.963451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:53.963945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:53.964003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:53.964039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:53.964536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:53.964812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:53.964831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:53.964847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:53.964861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:53.977938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:53.978460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:53.978488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:53.978503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:53.978774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:53.979039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:53.979058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:53.979073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:53.979087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:53.992355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:53.992886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:53.992914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:53.992931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:53.993189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:53.993450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:53.993473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:53.993488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:53.993503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.006802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.007324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.007353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.007368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.007627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.007899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.007919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.007934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.007948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.021218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.021742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.021770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.021787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.022048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.022311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.022330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.022345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.022360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.035624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.036073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.036102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.036118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.036379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.036642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.036669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.036685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.036703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.050221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.050711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.050739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.050756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.051016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.051280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.051298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.051314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.051329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.064628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.065161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.065225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.065260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.065803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.066067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.066085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.066101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.066115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.032 [2024-12-09 11:14:54.079291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.032 [2024-12-09 11:14:54.079820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.032 [2024-12-09 11:14:54.079848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.032 [2024-12-09 11:14:54.079865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.032 [2024-12-09 11:14:54.080125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.032 [2024-12-09 11:14:54.080389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.032 [2024-12-09 11:14:54.080408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.032 [2024-12-09 11:14:54.080423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.032 [2024-12-09 11:14:54.080437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.485234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.485759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.485817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.485852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.486446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.486866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.486884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.486900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.486913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 6662.25 IOPS, 26.02 MiB/s [2024-12-09T10:14:54.734Z] [2024-12-09 11:14:54.499637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.500078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.500106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.500121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.500381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.500643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.500670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.500685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.500700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.514213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.514712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.514770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.514806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.515407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.515886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.515905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.515921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.515937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.528732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.529123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.529151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.529168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.529428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.529701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.529720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.529736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.529750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.543280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.543758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.543786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.543801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.544063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.544326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.544344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.544359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.544373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.557698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.558162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.558218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.558254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.558762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.559025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.559044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.559063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.559078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.572118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.572633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.572703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.572739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.573333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.573832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.573850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.573866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.573880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.586665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.587158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.587186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.587202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.587462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.587732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.587751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.587766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.587780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.601045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.601557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.558 [2024-12-09 11:14:54.601584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.558 [2024-12-09 11:14:54.601600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.558 [2024-12-09 11:14:54.601868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.558 [2024-12-09 11:14:54.602132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.558 [2024-12-09 11:14:54.602149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.558 [2024-12-09 11:14:54.602164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.558 [2024-12-09 11:14:54.602178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.558 [2024-12-09 11:14:54.615474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.558 [2024-12-09 11:14:54.615936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.615965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.615982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.616242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.616505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.616523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.616538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.616552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.630080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.630509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.630535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.630551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.630819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.631082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.631100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.631115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.631129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.644653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.645148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.645174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.645190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.645450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.645720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.645739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.645754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.645768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.659046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.659554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.659581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.659601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.659869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.660131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.660148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.660164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.660177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.673474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.673943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.674000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.674036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.674630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.675196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.675214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.675229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.675243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.688019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.688531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.688557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.688573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.688841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.689105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.689123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.689138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.689152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.702431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.702930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.702958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.702973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.703234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.703500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.703518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.703534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.703548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.716826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.717342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.717369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.717385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.717653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.559 [2024-12-09 11:14:54.717917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.559 [2024-12-09 11:14:54.717935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.559 [2024-12-09 11:14:54.717950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.559 [2024-12-09 11:14:54.717964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.559 [2024-12-09 11:14:54.731245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.559 [2024-12-09 11:14:54.731698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.559 [2024-12-09 11:14:54.731725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.559 [2024-12-09 11:14:54.731741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.559 [2024-12-09 11:14:54.732002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.818 [2024-12-09 11:14:54.732264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.818 [2024-12-09 11:14:54.732285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.818 [2024-12-09 11:14:54.732301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.818 [2024-12-09 11:14:54.732315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.818 [2024-12-09 11:14:54.745853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.819 [2024-12-09 11:14:54.746296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.819 [2024-12-09 11:14:54.746323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.819 [2024-12-09 11:14:54.746339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.819 [2024-12-09 11:14:54.746599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.819 [2024-12-09 11:14:54.746871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.819 [2024-12-09 11:14:54.746889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.819 [2024-12-09 11:14:54.746908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.819 [2024-12-09 11:14:54.746922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.819 [2024-12-09 11:14:54.760448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:53.819 [2024-12-09 11:14:54.760930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:53.819 [2024-12-09 11:14:54.760991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:53.819 [2024-12-09 11:14:54.761027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:53.819 [2024-12-09 11:14:54.761623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:53.819 [2024-12-09 11:14:54.762164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:53.819 [2024-12-09 11:14:54.762182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:53.819 [2024-12-09 11:14:54.762198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:53.819 [2024-12-09 11:14:54.762212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:53.819 [2024-12-09 11:14:54.775020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.775530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.775557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.775573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.775840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.776103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.776120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.776136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.776149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.789427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.789877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.789905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.789920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.790181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.790444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.790462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.790477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.790491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.804027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.804521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.804549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.804564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.804833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.805097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.805115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.805129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.805143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.818413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.818930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.818958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.818974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.819234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.819496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.819514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.819528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.819542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.832825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.833334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.833361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.833377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.833636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.833907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.833926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.833941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.833955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.847232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.847753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.847809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.847852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.848303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.848565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.848583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.848598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.848612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.861671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.862124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.862152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.862168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.862428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.862699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.862718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.862733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.862747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.876268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.876777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.876835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.876870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.877347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.877610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.877628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.877651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.877666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.890706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.891155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.891183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.891199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.891459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.891738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.891757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.891772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.891786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.905292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.905748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.905777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.905793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.906053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.906315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.906333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.906348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.906362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.919879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.920317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.920344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.920360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.920620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.920890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.920909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.920924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.920938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.934455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.934977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.935005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.935021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.935281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.935543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.935561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.935580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.935594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.948885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.949387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.949443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.949479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.949929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.950194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.950212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.950227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.950241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.963265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.963704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.963732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.963748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.964008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.964271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.964289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.964305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.964319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.819 [2024-12-09 11:14:54.977872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.819 [2024-12-09 11:14:54.978395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.819 [2024-12-09 11:14:54.978450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.819 [2024-12-09 11:14:54.978485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.819 [2024-12-09 11:14:54.978974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.819 [2024-12-09 11:14:54.979237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.819 [2024-12-09 11:14:54.979255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.819 [2024-12-09 11:14:54.979270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.819 [2024-12-09 11:14:54.979284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:53.820 [2024-12-09 11:14:54.992321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:53.820 [2024-12-09 11:14:54.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:53.820 [2024-12-09 11:14:54.992820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:53.820 [2024-12-09 11:14:54.992836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:53.820 [2024-12-09 11:14:54.993096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:53.820 [2024-12-09 11:14:54.993359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:53.820 [2024-12-09 11:14:54.993377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:53.820 [2024-12-09 11:14:54.993392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:53.820 [2024-12-09 11:14:54.993406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.078 [2024-12-09 11:14:55.006940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.078 [2024-12-09 11:14:55.007456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.078 [2024-12-09 11:14:55.007515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.078 [2024-12-09 11:14:55.007551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.078 [2024-12-09 11:14:55.008166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.078 [2024-12-09 11:14:55.008781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.078 [2024-12-09 11:14:55.008818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.078 [2024-12-09 11:14:55.008833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.078 [2024-12-09 11:14:55.008848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.078 [2024-12-09 11:14:55.021391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.078 [2024-12-09 11:14:55.021898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.078 [2024-12-09 11:14:55.021926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.078 [2024-12-09 11:14:55.021942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.078 [2024-12-09 11:14:55.022202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.078 [2024-12-09 11:14:55.022464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.078 [2024-12-09 11:14:55.022483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.078 [2024-12-09 11:14:55.022498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.078 [2024-12-09 11:14:55.022512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.078 [2024-12-09 11:14:55.035795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.078 [2024-12-09 11:14:55.036325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.078 [2024-12-09 11:14:55.036380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.078 [2024-12-09 11:14:55.036423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.078 [2024-12-09 11:14:55.036932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.078 [2024-12-09 11:14:55.037196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.078 [2024-12-09 11:14:55.037214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.078 [2024-12-09 11:14:55.037229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.078 [2024-12-09 11:14:55.037242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.078 [2024-12-09 11:14:55.050273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.078 [2024-12-09 11:14:55.050754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.050797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.051058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.051320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.051338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.051354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.051368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.064891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.065339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.065365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.065381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.065639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.065910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.065928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.065943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.065957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.079465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.079978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.080006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.080022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.080282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.080548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.080566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.080581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.080595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.093881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.094322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.094350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.094365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.094625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.095050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.095069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.095085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.095099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.108371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.108811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.108839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.108855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.109114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.109377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.109395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.109410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.109424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.122946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.123460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.123487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.123503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.123771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.124033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.124051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.124070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.124085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.137361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.137791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.137819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.137835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.138095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.138359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.138376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.138391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.138405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.151926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.152370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.152426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.152461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.152987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.153252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.079 [2024-12-09 11:14:55.153270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.079 [2024-12-09 11:14:55.153284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.079 [2024-12-09 11:14:55.153298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.079 [2024-12-09 11:14:55.166331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:54.079 [2024-12-09 11:14:55.166766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:54.079 [2024-12-09 11:14:55.166794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:54.079 [2024-12-09 11:14:55.166809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:54.079 [2024-12-09 11:14:55.167069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:54.079 [2024-12-09 11:14:55.167332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:54.080 [2024-12-09 11:14:55.167350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:54.080 [2024-12-09 11:14:55.167365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:54.080 [2024-12-09 11:14:55.167379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:54.080 [2024-12-09 11:14:55.180905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.080 [2024-12-09 11:14:55.181385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.080 [2024-12-09 11:14:55.181411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.080 [2024-12-09 11:14:55.181427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.080 [2024-12-09 11:14:55.181695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.080 [2024-12-09 11:14:55.181966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.080 [2024-12-09 11:14:55.181984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.080 [2024-12-09 11:14:55.181999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.080 [2024-12-09 11:14:55.182012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.080 [2024-12-09 11:14:55.195524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.080 [2024-12-09 11:14:55.196048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.080 [2024-12-09 11:14:55.196078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.080 [2024-12-09 11:14:55.196094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.080 [2024-12-09 11:14:55.196357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.080 [2024-12-09 11:14:55.196621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.080 [2024-12-09 11:14:55.196639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.080 [2024-12-09 11:14:55.196671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.080 [2024-12-09 11:14:55.196685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.080 [2024-12-09 11:14:55.210022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.080 [2024-12-09 11:14:55.210560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.080 [2024-12-09 11:14:55.210618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.080 [2024-12-09 11:14:55.210674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.080 [2024-12-09 11:14:55.211153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.080 [2024-12-09 11:14:55.211415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.080 [2024-12-09 11:14:55.211433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.080 [2024-12-09 11:14:55.211448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.080 [2024-12-09 11:14:55.211463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.080 [2024-12-09 11:14:55.224504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.080 [2024-12-09 11:14:55.225009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.080 [2024-12-09 11:14:55.225038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.080 [2024-12-09 11:14:55.225058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.080 [2024-12-09 11:14:55.225318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.080 [2024-12-09 11:14:55.225581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.080 [2024-12-09 11:14:55.225598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.080 [2024-12-09 11:14:55.225614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.080 [2024-12-09 11:14:55.225628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.080 [2024-12-09 11:14:55.238919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.080 [2024-12-09 11:14:55.239429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.080 [2024-12-09 11:14:55.239457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.080 [2024-12-09 11:14:55.239473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.080 [2024-12-09 11:14:55.239740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.080 [2024-12-09 11:14:55.240004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.080 [2024-12-09 11:14:55.240022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.080 [2024-12-09 11:14:55.240037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.080 [2024-12-09 11:14:55.240051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.253324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.253837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.253894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.253930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.254431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.254700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.254719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.254735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.254749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.267784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.268277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.268305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.268321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.268580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.268860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.268885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.268900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.268915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.282188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.282702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.282730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.282746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.283007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.283269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.283287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.283302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.283316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.296607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.297118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.297146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.297161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.297422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.297692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.297710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.297725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.297739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.311013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.311521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.311549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.311564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.311833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.312097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.312115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.312130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.312148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.325437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.325880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.325907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.325923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.326183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.326444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.326463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.326478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.326492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.340009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.340519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.340547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.340562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.340830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.341095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.341113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.341128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.341142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.354409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.354853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.354881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.354896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.355157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.355420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.355439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.355454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.355468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.368992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.369518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.369546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.369561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.369830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.370094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.370112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.370127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.370141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.383420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.383929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.383985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.384020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.384613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.385036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.385055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.385070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.385084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.397867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.398321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.398348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.398364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.398624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.398896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.398915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.398931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.398946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.412457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.412930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.412987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.413022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.413625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.414093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.414111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.414126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.414140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.426932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.427431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.427458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.427474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.427743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.428006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.428024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.428039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.428053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.441322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.441832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.441861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.441877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.442138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.442400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.442418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.442433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.340 [2024-12-09 11:14:55.442447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.340 [2024-12-09 11:14:55.455741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.340 [2024-12-09 11:14:55.456175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.340 [2024-12-09 11:14:55.456203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.340 [2024-12-09 11:14:55.456218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.340 [2024-12-09 11:14:55.456478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.340 [2024-12-09 11:14:55.456749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.340 [2024-12-09 11:14:55.456772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.340 [2024-12-09 11:14:55.456787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.341 [2024-12-09 11:14:55.456802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.341 [2024-12-09 11:14:55.470337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.341 [2024-12-09 11:14:55.470788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.341 [2024-12-09 11:14:55.470815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.341 [2024-12-09 11:14:55.470831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.341 [2024-12-09 11:14:55.471091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.341 [2024-12-09 11:14:55.471354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.341 [2024-12-09 11:14:55.471371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.341 [2024-12-09 11:14:55.471386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.341 [2024-12-09 11:14:55.471400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.341 [2024-12-09 11:14:55.484926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.341 [2024-12-09 11:14:55.485358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.341 [2024-12-09 11:14:55.485385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.341 [2024-12-09 11:14:55.485401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.341 [2024-12-09 11:14:55.485669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.341 [2024-12-09 11:14:55.485932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.341 [2024-12-09 11:14:55.485950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.341 [2024-12-09 11:14:55.485965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.341 [2024-12-09 11:14:55.485979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.341 5329.80 IOPS, 20.82 MiB/s [2024-12-09T10:14:55.517Z] [2024-12-09 11:14:55.501416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.341 [2024-12-09 11:14:55.501856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.341 [2024-12-09 11:14:55.501884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.341 [2024-12-09 11:14:55.501899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.341 [2024-12-09 11:14:55.502159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.341 [2024-12-09 11:14:55.502422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.341 [2024-12-09 11:14:55.502440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.341 [2024-12-09 11:14:55.502455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.341 [2024-12-09 11:14:55.502473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.516022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.516534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.516561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.516577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.516845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.517109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.517127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.517142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.517156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.530439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.530940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.530968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.530984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.531244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.531507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.531525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.531541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.531555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.544847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.545344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.545372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.545390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.545660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.545925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.545944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.545959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.545973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.559273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.559781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.559810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.559826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.560087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.560350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.560369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.560384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.560398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.573694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.574133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.574161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.574177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.574436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.574706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.574725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.574741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.574755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.588270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.588743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.588772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.588788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.589048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.589310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.589328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.589344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.602 [2024-12-09 11:14:55.589358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.602 [2024-12-09 11:14:55.602877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.602 [2024-12-09 11:14:55.603352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.602 [2024-12-09 11:14:55.603379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.602 [2024-12-09 11:14:55.603395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.602 [2024-12-09 11:14:55.603670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.602 [2024-12-09 11:14:55.603935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.602 [2024-12-09 11:14:55.603954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.602 [2024-12-09 11:14:55.603969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.603983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.617481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.618008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.618036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.618051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.618310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.618573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.618592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.618607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.618622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.631897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.632420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.632448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.632464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.632731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.632996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.633015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.633030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.633045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.646327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.646850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.646878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.646895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.647154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.647418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.647441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.647456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.647470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.660770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.661277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.661305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.661321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.661582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.661854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.661875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.661892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.661906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.675219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.675676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.675704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.675722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.675984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.676250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.676270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.676287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.676302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.689613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.690014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.690043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.690059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.690319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.690583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.690602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.690618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.690636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.704202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.704673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.704702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.704718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.704978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.705242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.705261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.705276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.705290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.718825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.719275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.719303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.719319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.719581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.719854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.719873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.719889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.719903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.733437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.733937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.733965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.733981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.734241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.734505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.734523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.734539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.603 [2024-12-09 11:14:55.734553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.603 [2024-12-09 11:14:55.747864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.603 [2024-12-09 11:14:55.748423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.603 [2024-12-09 11:14:55.748487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.603 [2024-12-09 11:14:55.748523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.603 [2024-12-09 11:14:55.749131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.603 [2024-12-09 11:14:55.749635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.603 [2024-12-09 11:14:55.749659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.603 [2024-12-09 11:14:55.749675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.604 [2024-12-09 11:14:55.749690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.604 [2024-12-09 11:14:55.762496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.604 [2024-12-09 11:14:55.763015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.604 [2024-12-09 11:14:55.763043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.604 [2024-12-09 11:14:55.763060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.604 [2024-12-09 11:14:55.763320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.604 [2024-12-09 11:14:55.763583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.604 [2024-12-09 11:14:55.763601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.604 [2024-12-09 11:14:55.763617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.604 [2024-12-09 11:14:55.763631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.864 [2024-12-09 11:14:55.776954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.777471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.777500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.777515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.777786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.778051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.778070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.778085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.778100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.791405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.791879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.791908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.791925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.792189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.792453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.792471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.792487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.792501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.805811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.806255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.806283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.806300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.806560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.806834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.806854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.806870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.806884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.820437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.820953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.820981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.820998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.821257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.821520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.821539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.821554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.821569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.834885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.835434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.835501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.835537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.836150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.836753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.836776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.836792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.836807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.849377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.849887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.849916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.849932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.850192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.850457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.850475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.850490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.850505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.863785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.864191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.864219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.864235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.864496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.864765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.864785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.864800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.864814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.878343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.878859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.878886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.878902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.879161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.879424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.879443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.879458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.879473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.892761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.893224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.893252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.893268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.893528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.865 [2024-12-09 11:14:55.893809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.865 [2024-12-09 11:14:55.893829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.865 [2024-12-09 11:14:55.893845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.865 [2024-12-09 11:14:55.893859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.865 [2024-12-09 11:14:55.907373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.865 [2024-12-09 11:14:55.907900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.865 [2024-12-09 11:14:55.907928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.865 [2024-12-09 11:14:55.907944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.865 [2024-12-09 11:14:55.908203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.908463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.908482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.908498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.908512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.921788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.922315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.922343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.922359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.922621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.922895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.922915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.922932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.922948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.936236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.936698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.936750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.936786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.937380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.937989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.938009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.938024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.938038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.950834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.951306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.951362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.951397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.951874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.952246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.952274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.952297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.952318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.965659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.966197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.966252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.966287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.966901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.967302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.967321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.967336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.967351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.980149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.980678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.980705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.980721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.980985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.981247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.981265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.981281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.981296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:55.994571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:55.995099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:55.995127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:55.995143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:55.995402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:55.995671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:55.995691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:55.995707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:55.995720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:56.009013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:56.009535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:56.009563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:56.009578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:56.009846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:56.010110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:56.010129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:56.010144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:56.010159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:56.023427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:56.023957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:56.023985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:56.024001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:56.024261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:54.866 [2024-12-09 11:14:56.024524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:54.866 [2024-12-09 11:14:56.024543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:54.866 [2024-12-09 11:14:56.024562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:54.866 [2024-12-09 11:14:56.024577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:54.866 [2024-12-09 11:14:56.037860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:54.866 [2024-12-09 11:14:56.038380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:54.866 [2024-12-09 11:14:56.038407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:54.866 [2024-12-09 11:14:56.038423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:54.866 [2024-12-09 11:14:56.038692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.127 [2024-12-09 11:14:56.038956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.127 [2024-12-09 11:14:56.038978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.127 [2024-12-09 11:14:56.038993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.127 [2024-12-09 11:14:56.039008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2551837 Killed "${NVMF_APP[@]}" "$@"
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
01:03:55.127 [2024-12-09 11:14:56.052288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
01:03:55.127 [2024-12-09 11:14:56.052801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:55.127 [2024-12-09 11:14:56.052859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
01:03:55.127 [2024-12-09 11:14:56.052896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:55.127 [2024-12-09 11:14:56.053379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:55.127 [2024-12-09 11:14:56.053653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:55.127 [2024-12-09 11:14:56.053673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:55.127 [2024-12-09 11:14:56.053688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:55.127 [2024-12-09 11:14:56.053703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2552930
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2552930
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2552930 ']'
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:03:55.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
01:03:55.127 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
01:03:55.127 [2024-12-09 11:14:56.066777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:55.127 [2024-12-09 11:14:56.067261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:55.127 [2024-12-09 11:14:56.067290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:55.127 [2024-12-09 11:14:56.067307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:55.127 [2024-12-09 11:14:56.067566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:55.127 [2024-12-09 11:14:56.067836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:55.127 [2024-12-09 11:14:56.067856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:55.127 [2024-12-09 11:14:56.067872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:55.127 [2024-12-09 11:14:56.067887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:55.127 [2024-12-09 11:14:56.081198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.127 [2024-12-09 11:14:56.081679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.127 [2024-12-09 11:14:56.081708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.127 [2024-12-09 11:14:56.081724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.127 [2024-12-09 11:14:56.081985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.127 [2024-12-09 11:14:56.082250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.127 [2024-12-09 11:14:56.082269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.127 [2024-12-09 11:14:56.082285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.127 [2024-12-09 11:14:56.082299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.127 [2024-12-09 11:14:56.095615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.127 [2024-12-09 11:14:56.096065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.127 [2024-12-09 11:14:56.096094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.127 [2024-12-09 11:14:56.096111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.127 [2024-12-09 11:14:56.096373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.127 [2024-12-09 11:14:56.096637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.127 [2024-12-09 11:14:56.096671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.127 [2024-12-09 11:14:56.096687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.127 [2024-12-09 11:14:56.096706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.127 [2024-12-09 11:14:56.110237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:55.127 [2024-12-09 11:14:56.110686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:55.127 [2024-12-09 11:14:56.110714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:55.127 [2024-12-09 11:14:56.110731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:55.127 [2024-12-09 11:14:56.110992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:55.127 [2024-12-09 11:14:56.111253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:55.127 [2024-12-09 11:14:56.111272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:55.127 [2024-12-09 11:14:56.111287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:55.128 [2024-12-09 11:14:56.111301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:55.128 [2024-12-09 11:14:56.122971] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
01:03:55.128 [2024-12-09 11:14:56.123047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
01:03:55.128 [2024-12-09 11:14:56.124783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
01:03:55.128 [2024-12-09 11:14:56.125239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:03:55.128 [2024-12-09 11:14:56.125295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420
01:03:55.128 [2024-12-09 11:14:56.125331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set
01:03:55.128 [2024-12-09 11:14:56.125944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor
01:03:55.128 [2024-12-09 11:14:56.126476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
01:03:55.128 [2024-12-09 11:14:56.126496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
01:03:55.128 [2024-12-09 11:14:56.126512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
01:03:55.128 [2024-12-09 11:14:56.126528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
01:03:55.128 [2024-12-09 11:14:56.139337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.139865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.139893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.139911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.140170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.140433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.140452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.140473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.140488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.153789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.154268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.154296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.154312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.154573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.154844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.154863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.154879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.154893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.168403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.168931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.168959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.168975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.169235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.169498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.169517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.169532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.169548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.182841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.183270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.183297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.183315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.183575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.183844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.183864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.183880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.183894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.197435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.197975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.198002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.198019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.198279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.198842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.198863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.198879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.198893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.211981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.212484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.212512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.212529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.212798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.213062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.213080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.213095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.213109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.226378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.226895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.226923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.226940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.227199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.227461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.227480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.227495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.227509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.229420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:55.128 [2024-12-09 11:14:56.240816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.241375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.241408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.241433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.241703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.241970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.241988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.242004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.242020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.255304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.255802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.255830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.255847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.256108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.256371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.256389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.256405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.256419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.269704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.270159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.270187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.270203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.270465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.270739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.270758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.270774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.270789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:03:55.128 [2024-12-09 11:14:56.271815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:55.128 [2024-12-09 11:14:56.271852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:55.128 [2024-12-09 11:14:56.271863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:55.128 [2024-12-09 11:14:56.271874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:03:55.128 [2024-12-09 11:14:56.271883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:55.128 [2024-12-09 11:14:56.273094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:55.128 [2024-12-09 11:14:56.273183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:55.128 [2024-12-09 11:14:56.273185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:55.128 [2024-12-09 11:14:56.284108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.284670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.284705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.284722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.284983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.285247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.285265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.285282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.285297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.128 [2024-12-09 11:14:56.298608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.128 [2024-12-09 11:14:56.299175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.128 [2024-12-09 11:14:56.299209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.128 [2024-12-09 11:14:56.299226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.128 [2024-12-09 11:14:56.299488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.128 [2024-12-09 11:14:56.299759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.128 [2024-12-09 11:14:56.299778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.128 [2024-12-09 11:14:56.299795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.128 [2024-12-09 11:14:56.299810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.313097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.313504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.313535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.313553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.313822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.314089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.314109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.314125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.314140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.327684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.328196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.328228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.328245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.328506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.328779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.328798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.328814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.328830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.342113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.342641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.342677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.342694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.342954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.343217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.343235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.343251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.343265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.356552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.357094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.357122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.357138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.357398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.357668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.357687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.357702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.357717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:55.396 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 01:03:55.396 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:55.396 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:55.396 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.396 [2024-12-09 11:14:56.370991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.371437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.371464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.371480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.371747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.372010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.372029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.372044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.372058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.385585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.386042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.386070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.386086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.386346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.386609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.386627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.386643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.386665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.400183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.400706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.400738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.400754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.401014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.396 [2024-12-09 11:14:56.401276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.396 [2024-12-09 11:14:56.401294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.396 [2024-12-09 11:14:56.401310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.396 [2024-12-09 11:14:56.401324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.396 [2024-12-09 11:14:56.414617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.396 [2024-12-09 11:14:56.415076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.396 [2024-12-09 11:14:56.415114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.396 [2024-12-09 11:14:56.415130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.396 [2024-12-09 11:14:56.415391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.397 [2024-12-09 11:14:56.415662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.397 [2024-12-09 11:14:56.415681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.397 [2024-12-09 11:14:56.415696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.397 [2024-12-09 11:14:56.415710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.397 [2024-12-09 11:14:56.423894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:55.397 [2024-12-09 11:14:56.429224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:55.397 [2024-12-09 11:14:56.429763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.397 [2024-12-09 11:14:56.429791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.397 [2024-12-09 11:14:56.429807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.397 [2024-12-09 11:14:56.430068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:55.397 [2024-12-09 11:14:56.430332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.397 [2024-12-09 11:14:56.430350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.397 [2024-12-09 11:14:56.430366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.397 [2024-12-09 11:14:56.430380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.397 [2024-12-09 11:14:56.443669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.397 [2024-12-09 11:14:56.444168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.397 [2024-12-09 11:14:56.444197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.397 [2024-12-09 11:14:56.444214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.397 [2024-12-09 11:14:56.444473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.397 [2024-12-09 11:14:56.444745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.397 [2024-12-09 11:14:56.444765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.397 [2024-12-09 11:14:56.444786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.397 [2024-12-09 11:14:56.444800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.397 [2024-12-09 11:14:56.458089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.397 [2024-12-09 11:14:56.458565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.397 [2024-12-09 11:14:56.458594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.397 [2024-12-09 11:14:56.458611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.397 [2024-12-09 11:14:56.458882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.397 [2024-12-09 11:14:56.459147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.397 [2024-12-09 11:14:56.459166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:03:55.397 [2024-12-09 11:14:56.459181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.397 [2024-12-09 11:14:56.459197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
01:03:55.397 Malloc0 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.397 [2024-12-09 11:14:56.472715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.397 [2024-12-09 11:14:56.473247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:03:55.397 [2024-12-09 11:14:56.473275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea1a20 with addr=10.0.0.2, port=4420 01:03:55.397 [2024-12-09 11:14:56.473291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1a20 is same with the state(6) to be set 01:03:55.397 [2024-12-09 11:14:56.473563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea1a20 (9): Bad file descriptor 01:03:55.397 [2024-12-09 11:14:56.473835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:03:55.397 [2024-12-09 11:14:56.473854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] controller reinitialization failed 01:03:55.397 [2024-12-09 11:14:56.473870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:03:55.397 [2024-12-09 11:14:56.473885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:03:55.397 [2024-12-09 11:14:56.482413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:55.397 [2024-12-09 11:14:56.487160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:03:55.397 11:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2552127 01:03:55.656 4441.50 IOPS, 17.35 MiB/s [2024-12-09T10:14:56.832Z] [2024-12-09 11:14:56.643868] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
01:03:57.528 5167.29 IOPS, 20.18 MiB/s [2024-12-09T10:14:59.641Z] 5748.00 IOPS, 22.45 MiB/s [2024-12-09T10:15:00.581Z] 6244.11 IOPS, 24.39 MiB/s [2024-12-09T10:15:01.522Z] 6636.70 IOPS, 25.92 MiB/s [2024-12-09T10:15:02.904Z] 7004.27 IOPS, 27.36 MiB/s [2024-12-09T10:15:03.848Z] 7264.83 IOPS, 28.38 MiB/s [2024-12-09T10:15:04.787Z] 7465.08 IOPS, 29.16 MiB/s [2024-12-09T10:15:05.726Z] 7670.21 IOPS, 29.96 MiB/s [2024-12-09T10:15:05.726Z] 7835.20 IOPS, 30.61 MiB/s 01:04:04.550 Latency(us) 01:04:04.550 [2024-12-09T10:15:05.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:04:04.551 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:04:04.551 Verification LBA range: start 0x0 length 0x4000 01:04:04.551 Nvme1n1 : 15.01 7836.73 30.61 7585.59 0.00 8269.46 619.74 16526.47 01:04:04.551 [2024-12-09T10:15:05.727Z] =================================================================================================================== 01:04:04.551 [2024-12-09T10:15:05.727Z] Total : 7836.73 30.61 7585.59 0.00 8269.46 619.74 16526.47 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:04.811 rmmod nvme_tcp 01:04:04.811 rmmod nvme_fabrics 01:04:04.811 rmmod nvme_keyring 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2552930 ']' 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2552930 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2552930 ']' 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2552930 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552930 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552930' 01:04:04.811 killing process with pid 2552930 01:04:04.811 11:15:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2552930 01:04:04.811 11:15:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2552930 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:05.070 11:15:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:04:07.612 01:04:07.612 real 0m27.540s 01:04:07.612 user 1m2.079s 01:04:07.612 sys 0m7.886s 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:04:07.612 ************************************ 01:04:07.612 END TEST nvmf_bdevperf 01:04:07.612 
************************************ 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:04:07.612 ************************************ 01:04:07.612 START TEST nvmf_target_disconnect 01:04:07.612 ************************************ 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 01:04:07.612 * Looking for test storage... 01:04:07.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:07.612 --rc genhtml_branch_coverage=1 01:04:07.612 --rc genhtml_function_coverage=1 01:04:07.612 --rc genhtml_legend=1 01:04:07.612 --rc geninfo_all_blocks=1 01:04:07.612 --rc geninfo_unexecuted_blocks=1 
01:04:07.612 01:04:07.612 ' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:07.612 --rc genhtml_branch_coverage=1 01:04:07.612 --rc genhtml_function_coverage=1 01:04:07.612 --rc genhtml_legend=1 01:04:07.612 --rc geninfo_all_blocks=1 01:04:07.612 --rc geninfo_unexecuted_blocks=1 01:04:07.612 01:04:07.612 ' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:07.612 --rc genhtml_branch_coverage=1 01:04:07.612 --rc genhtml_function_coverage=1 01:04:07.612 --rc genhtml_legend=1 01:04:07.612 --rc geninfo_all_blocks=1 01:04:07.612 --rc geninfo_unexecuted_blocks=1 01:04:07.612 01:04:07.612 ' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:07.612 --rc genhtml_branch_coverage=1 01:04:07.612 --rc genhtml_function_coverage=1 01:04:07.612 --rc genhtml_legend=1 01:04:07.612 --rc geninfo_all_blocks=1 01:04:07.612 --rc geninfo_unexecuted_blocks=1 01:04:07.612 01:04:07.612 ' 01:04:07.612 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:07.613 11:15:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:07.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 01:04:07.613 11:15:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 01:04:14.190 
11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:04:14.190 Found 0000:af:00.0 (0x8086 - 0x159b) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:04:14.190 Found 0000:af:00.1 (0x8086 - 0x159b) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:14.190 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:04:14.191 Found net devices under 0000:af:00.0: cvl_0_0 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:04:14.191 Found net devices under 0000:af:00.1: cvl_0_1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:04:14.191 11:15:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:04:14.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:14.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 01:04:14.191 01:04:14.191 --- 10.0.0.2 ping statistics --- 01:04:14.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:14.191 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:04:14.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:14.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 01:04:14.191 01:04:14.191 --- 10.0.0.1 ping statistics --- 01:04:14.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:14.191 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:14.191 11:15:15 
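The `nvmf_tcp_init` sequence traced above (common.sh@250–291) amounts to: park one E810 port in a private network namespace as the target side, keep the other port in the root namespace as the initiator, open TCP/4420, and ping both ways. A minimal sketch with the interface names and addresses taken from the log; the `run` wrapper and `DRY_RUN` guard are additions here, since the real commands need root and the two physical ports:

```shell
#!/usr/bin/env bash
# DRY_RUN=1 prints each command instead of executing it.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_tcp() {
  # Clear any stale addresses on both ports.
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  # Target side lives in its own namespace; initiator stays in the root ns.
  run ip netns add cvl_0_0_ns_spdk
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  run ip addr add 10.0.0.1/24 dev cvl_0_1
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port toward the initiator-facing interface.
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions, as common.sh does.
  run ping -c 1 10.0.0.2
  run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}

DRY_RUN=1 setup_nvmf_tcp
```

Putting the target behind a namespace is what lets a single host exercise a real TCP connect/disconnect path between two physical ports of the same NIC.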
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:14.191 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 01:04:14.450 ************************************ 01:04:14.450 START TEST nvmf_target_disconnect_tc1 01:04:14.450 ************************************ 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 01:04:14.450 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:14.450 [2024-12-09 11:15:15.588415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:14.450 [2024-12-09 11:15:15.588479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187f3e0 with 
addr=10.0.0.2, port=4420 01:04:14.450 [2024-12-09 11:15:15.588516] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:04:14.450 [2024-12-09 11:15:15.588542] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:04:14.450 [2024-12-09 11:15:15.588556] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 01:04:14.450 spdk_nvme_probe() failed for transport address '10.0.0.2' 01:04:14.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 01:04:14.710 Initializing NVMe Controllers 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:04:14.710 01:04:14.710 real 0m0.234s 01:04:14.710 user 0m0.133s 01:04:14.710 sys 0m0.100s 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 01:04:14.710 ************************************ 01:04:14.710 END TEST nvmf_target_disconnect_tc1 01:04:14.710 ************************************ 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:04:14.710 11:15:15 
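tc1 passes precisely because the probe fails: no target is listening yet, so `reconnect` hits `connect()` errno 111 and the `NOT` wrapper (autotest_common.sh@652–679) inverts the exit status, turning `es=1` into success via `(( !es == 0 ))`. A minimal sketch of that pattern; the real helper also validates the argument and special-cases signal exits (`es > 128`), which is omitted here:

```shell
NOT() {
  # Succeed only when the wrapped command fails. tc1 wraps the reconnect
  # probe of 10.0.0.2:4420 with this while no target is listening, so the
  # expected probe failure makes the test step pass.
  if "$@"; then
    return 1   # wrapped command unexpectedly succeeded
  fi
  return 0     # wrapped command failed, as the test expects
}

NOT false && echo "expected failure observed"
```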
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 01:04:14.710 ************************************ 01:04:14.710 START TEST nvmf_target_disconnect_tc2 01:04:14.710 ************************************ 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2557995 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2557995 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2557995 ']' 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:14.710 11:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:14.710 [2024-12-09 11:15:15.816911] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:04:14.710 [2024-12-09 11:15:15.816983] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:14.981 [2024-12-09 11:15:15.918904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:14.981 [2024-12-09 11:15:15.962781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:14.981 [2024-12-09 11:15:15.962825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:14.981 [2024-12-09 11:15:15.962835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:14.981 [2024-12-09 11:15:15.962844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:14.981 [2024-12-09 11:15:15.962852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
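The `waitforlisten` step above polls until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`, giving up after `max_retries=100`. The generic shape of that loop, sketched here with an arbitrary readiness command rather than the real RPC-socket check:

```shell
wait_for() {   # wait_for MAX_TRIES DELAY CMD...
  local tries=$1 delay=$2 i
  shift 2
  for ((i = 0; i < tries; i++)); do
    "$@" && return 0   # readiness check passed
    sleep "$delay"
  done
  return 1             # gave up after MAX_TRIES attempts
}

# e.g. (hypothetical): wait_for 100 0.1 test -S /var/tmp/spdk.sock
```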
01:04:14.981 [2024-12-09 11:15:15.964302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:04:14.981 [2024-12-09 11:15:15.964404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:04:14.981 [2024-12-09 11:15:15.964505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:04:14.981 [2024-12-09 11:15:15.964506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:04:14.981 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:14.981 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:14.982 Malloc0 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:14.982 11:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:14.982 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:14.982 [2024-12-09 11:15:16.148879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:15.241 11:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:15.241 [2024-12-09 11:15:16.181122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2558019 01:04:15.241 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 01:04:15.242 11:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:04:17.165 11:15:18 
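The `rpc_cmd` calls in target_disconnect.sh@19–26 map one-to-one onto SPDK's JSON-RPC client. A dry-run sketch of the same configuration sequence; the `rpc` echo stub and the `rpc.py` path are assumptions here, since the log issues these calls through `rpc_cmd` against the target running inside `cvl_0_0_ns_spdk`:

```shell
rpc() { echo "rpc.py $*"; }  # stub; against a live target: scripts/rpc.py "$@"

# Back the subsystem with a 64 MiB, 512 B-block malloc bdev, then expose it
# over NVMe/TCP on 10.0.0.2:4420 along with a discovery listener.
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up, the `reconnect` workload is launched against the same trid and the test moves on to injecting the disconnect.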
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2557995 01:04:17.165 11:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read 
completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 [2024-12-09 11:15:18.212219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 
01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Write completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.165 starting I/O failed 01:04:17.165 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 
01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 [2024-12-09 11:15:18.212505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 
starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 [2024-12-09 11:15:18.212759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, 
sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Write completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 Read completed with error (sct=0, sc=8) 01:04:17.166 starting I/O failed 01:04:17.166 [2024-12-09 11:15:18.213016] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 01:04:17.166 [2024-12-09 11:15:18.213153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.166 [2024-12-09 11:15:18.213183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.166 qpair failed and we were unable to recover it. 01:04:17.166 [2024-12-09 11:15:18.213366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.166 [2024-12-09 11:15:18.213391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.166 qpair failed and we were unable to recover it. 01:04:17.166 [2024-12-09 11:15:18.213513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.166 [2024-12-09 11:15:18.213527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.166 qpair failed and we were unable to recover it. 01:04:17.166 [2024-12-09 11:15:18.213639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.166 [2024-12-09 11:15:18.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.166 qpair failed and we were unable to recover it. 01:04:17.166 [2024-12-09 11:15:18.213760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.166 [2024-12-09 11:15:18.213773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.166 qpair failed and we were unable to recover it. 
01:04:17.166 [2024-12-09 11:15:18.213933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.166 [2024-12-09 11:15:18.213947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.166 qpair failed and we were unable to recover it.
01:04:17.166 [2024-12-09 11:15:18.214100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.166 [2024-12-09 11:15:18.214114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.166 qpair failed and we were unable to recover it.
01:04:17.166 [2024-12-09 11:15:18.214270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.166 [2024-12-09 11:15:18.214284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.166 qpair failed and we were unable to recover it.
01:04:17.166 [2024-12-09 11:15:18.214379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.214497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.214587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.214714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.214808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.214932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.214954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.215937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.215950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.216968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.216981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.217983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.217997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.218073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.218087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.218222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.218236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.218317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.218331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.218439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.167 [2024-12-09 11:15:18.218452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.167 qpair failed and we were unable to recover it.
01:04:17.167 [2024-12-09 11:15:18.218535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.218549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.218692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.218707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.218791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.218805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.218940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.218953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.219883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.219896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.220966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.220978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.221953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.221966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.222050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.168 [2024-12-09 11:15:18.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.168 qpair failed and we were unable to recover it.
01:04:17.168 [2024-12-09 11:15:18.222208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.222965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.222979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.223924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.223938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.224033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.224045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.224121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.224133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.224222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.224235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.224379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.169 [2024-12-09 11:15:18.224391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.169 qpair failed and we were unable to recover it.
01:04:17.169 [2024-12-09 11:15:18.224464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.224549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.224653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.224753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.224850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 
01:04:17.169 [2024-12-09 11:15:18.224945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.224957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 
01:04:17.169 [2024-12-09 11:15:18.225468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.225934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.225947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 
01:04:17.169 [2024-12-09 11:15:18.226103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.226120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.226202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.226215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.226293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.169 [2024-12-09 11:15:18.226307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.169 qpair failed and we were unable to recover it. 01:04:17.169 [2024-12-09 11:15:18.226378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.226470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.226559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.226664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.226754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.226965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.226977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.227198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.227698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.227948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.227960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.228348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.228790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.228916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.228930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.229351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.170 [2024-12-09 11:15:18.229696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 
01:04:17.170 [2024-12-09 11:15:18.229787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.170 [2024-12-09 11:15:18.229800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.170 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.229866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.229878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.229951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.229964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.230282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.230885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.230983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.230996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.231338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.231884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.231979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.231993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.232350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.232825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.232917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.232931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 
01:04:17.171 [2024-12-09 11:15:18.233338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.171 qpair failed and we were unable to recover it. 01:04:17.171 [2024-12-09 11:15:18.233780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.171 [2024-12-09 11:15:18.233793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.233876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.233890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.233993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.234422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.234890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.234977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.234990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.235578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.235899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.235942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.236092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.236281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.236373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.236490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.236680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.236880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.236923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.237073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.237172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.237322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.237495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.237749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.237934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.237977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.238120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 
01:04:17.172 [2024-12-09 11:15:18.238736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.238933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.238946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.172 qpair failed and we were unable to recover it. 01:04:17.172 [2024-12-09 11:15:18.239084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.172 [2024-12-09 11:15:18.239098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.239275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.239753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.239954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.239967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.240035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.240048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.240208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.240251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.240413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.240459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.240604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.240658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.240802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.240844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.241431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.241807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.241955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.241968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.242430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.242861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.242946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.242960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.243045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.243141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.243227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.243401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 
01:04:17.173 [2024-12-09 11:15:18.243596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.243846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.243891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.173 qpair failed and we were unable to recover it. 01:04:17.173 [2024-12-09 11:15:18.244015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.173 [2024-12-09 11:15:18.244028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 
01:04:17.174 [2024-12-09 11:15:18.244383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.244821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 
01:04:17.174 [2024-12-09 11:15:18.244921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.244937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 
01:04:17.174 [2024-12-09 11:15:18.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.245879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 
01:04:17.174 [2024-12-09 11:15:18.245964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.245977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.246053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.246066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.246143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.246156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.246227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.246240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 01:04:17.174 [2024-12-09 11:15:18.246323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.174 [2024-12-09 11:15:18.246336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.174 qpair failed and we were unable to recover it. 
01:04:17.174 [2024-12-09 11:15:18.246405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.246503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.246588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.246674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.246822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.246967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.246981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.174 [2024-12-09 11:15:18.247691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.174 qpair failed and we were unable to recover it.
01:04:17.174 [2024-12-09 11:15:18.247765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.247779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.247860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.247873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.247960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.247973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.248939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.248982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.249898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.249911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.250991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.251981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.251994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.252090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.252165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.252178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.252327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.175 [2024-12-09 11:15:18.252340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.175 qpair failed and we were unable to recover it.
01:04:17.175 [2024-12-09 11:15:18.252431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.252533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.252618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.252724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.252874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.252962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.252974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.253940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.253953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.254951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.254964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.255820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.255848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.256014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.256060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.256210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.256254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.256403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.256415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.256492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.256505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.176 qpair failed and we were unable to recover it.
01:04:17.176 [2024-12-09 11:15:18.256580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.176 [2024-12-09 11:15:18.256593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.256677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.177 [2024-12-09 11:15:18.256691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.256797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.177 [2024-12-09 11:15:18.256810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.256900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.177 [2024-12-09 11:15:18.256914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.256994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.177 [2024-12-09 11:15:18.257007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.257081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.177 [2024-12-09 11:15:18.257094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.177 qpair failed and we were unable to recover it.
01:04:17.177 [2024-12-09 11:15:18.257245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.257734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.257851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.257991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.258332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.258807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.258983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.258996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.259247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.259816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.259912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.259988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.260137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.260227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 
01:04:17.177 [2024-12-09 11:15:18.260311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.260394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.260486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.177 qpair failed and we were unable to recover it. 01:04:17.177 [2024-12-09 11:15:18.260648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.177 [2024-12-09 11:15:18.260662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.260805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.260888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.260900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.260976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.260989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.261408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.261862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.261975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.261988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.262147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.262190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.262347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.262390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.262541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.262589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.262752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.262965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.263733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.263938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.263952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.264208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.264764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.264964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.264978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.265052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.265066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.265172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.265187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 
01:04:17.178 [2024-12-09 11:15:18.265259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.265272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.178 [2024-12-09 11:15:18.265363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.178 [2024-12-09 11:15:18.265377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.178 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.265465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.265558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.265654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.265750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.265836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.265932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.265947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.266205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.266690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.266945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.266958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.267238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.267754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.267944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.267957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.268091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.268104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.268277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.268323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.268485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.268549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.268717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.268763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.268919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.268963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.269118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.269163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.179 [2024-12-09 11:15:18.269346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.269369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 
01:04:17.179 [2024-12-09 11:15:18.269459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.179 [2024-12-09 11:15:18.269474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.179 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.269611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.269624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.269703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.269717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.269853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.269867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.269940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.269953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.270025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.270586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.270953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.270966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.271034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.271475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.271849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.271937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.271950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.272366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.180 [2024-12-09 11:15:18.272913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.272996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.273009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.273091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.273104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.273197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.273211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 01:04:17.180 [2024-12-09 11:15:18.273281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.180 [2024-12-09 11:15:18.273294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.180 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.273369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.273452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.273606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.273688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.273785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.273867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.273880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.274399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.274873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.274984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.274998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.275162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.275205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.275356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.275402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.275545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.275588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.275771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.275814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.275968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.276010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.276145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.276188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.276398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.276440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.276591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.276633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.276788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.276831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.276987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.277134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.277221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.277328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.277420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.277624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.277897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.277941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.278070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.278164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 
01:04:17.181 [2024-12-09 11:15:18.278263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.278367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.278476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.181 [2024-12-09 11:15:18.278630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.181 [2024-12-09 11:15:18.278648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.181 qpair failed and we were unable to recover it. 01:04:17.182 [2024-12-09 11:15:18.278793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.182 [2024-12-09 11:15:18.278808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.182 qpair failed and we were unable to recover it. 
01:04:17.182 [2024-12-09 11:15:18.280475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.182 [2024-12-09 11:15:18.280501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.182 qpair failed and we were unable to recover it.
01:04:17.184 [2024-12-09 11:15:18.290595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.184 [2024-12-09 11:15:18.290608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.184 qpair failed and we were unable to recover it. 01:04:17.184 [2024-12-09 11:15:18.290684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.184 [2024-12-09 11:15:18.290697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.290773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.290786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.290886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.290899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.290973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.290985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.291056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.291574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.291921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.291934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.292020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.292541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.292892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.292993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.293082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.293531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.293880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.293892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.294176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.294339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.294421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 01:04:17.185 [2024-12-09 11:15:18.294502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.185 qpair failed and we were unable to recover it. 
01:04:17.185 [2024-12-09 11:15:18.294586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.185 [2024-12-09 11:15:18.294598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.294696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.294710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.294778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.294791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.294867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.294879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.294958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.294971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.295104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.295526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.295950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.295963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.296056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.296551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.296903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.296916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.296998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.297522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 01:04:17.186 [2024-12-09 11:15:18.297876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.186 [2024-12-09 11:15:18.297889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.186 qpair failed and we were unable to recover it. 
01:04:17.186 [2024-12-09 11:15:18.298028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.186 [2024-12-09 11:15:18.298040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.186 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats continuously from 11:15:18.298028 through 11:15:18.309853, cycling across tqpair handles 0x7f1dd4000b90, 0x7f1dc8000b90, and 0x7f1dcc000b90, all targeting addr=10.0.0.2, port=4420 ...]
01:04:17.189 [2024-12-09 11:15:18.309937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.309950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.310375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.310853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.310865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.310997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.311458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.311975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.311989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.312079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.312532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.312944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.312958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.313176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 
01:04:17.190 [2024-12-09 11:15:18.313639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.313945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.313959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.190 qpair failed and we were unable to recover it. 01:04:17.190 [2024-12-09 11:15:18.314049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.190 [2024-12-09 11:15:18.314062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.314194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.314775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.314977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.314991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.315301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.315863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.315945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.315959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.316412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.316900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.316913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.316988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.317080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.317249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.317441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.317626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.317827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.317870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.318013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.318064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.318336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.318377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.318512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.318525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 
01:04:17.191 [2024-12-09 11:15:18.318687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.318731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.319011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.319054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.319261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.191 [2024-12-09 11:15:18.319304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.191 qpair failed and we were unable to recover it. 01:04:17.191 [2024-12-09 11:15:18.319510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.319551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 01:04:17.192 [2024-12-09 11:15:18.319739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.319784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 
01:04:17.192 [2024-12-09 11:15:18.319936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.319978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 01:04:17.192 [2024-12-09 11:15:18.320197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.320239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 01:04:17.192 [2024-12-09 11:15:18.320388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.320402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 01:04:17.192 [2024-12-09 11:15:18.320539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.320552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 01:04:17.192 [2024-12-09 11:15:18.320641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.192 [2024-12-09 11:15:18.320658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.192 qpair failed and we were unable to recover it. 
01:04:17.192 [2024-12-09 11:15:18.320866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.320915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.321933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.321951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.322788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.322836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.323860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.323874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.192 [2024-12-09 11:15:18.324733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.192 [2024-12-09 11:15:18.324776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.192 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.324923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.324967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.193 [2024-12-09 11:15:18.325818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.193 [2024-12-09 11:15:18.325831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.193 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.325925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.325965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.326953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.326965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.327044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.327057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.479 [2024-12-09 11:15:18.327137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.479 [2024-12-09 11:15:18.327150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.479 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.327233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.327246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.327324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.327337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.327494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.327536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.327682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.327728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.327925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.327968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.328122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.328167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.328319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.328362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.328575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.328618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.328772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.328816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.328969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.329175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.329255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.329410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.329612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.329819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.329862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.330967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.330979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.331966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.332939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.332952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.333098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.480 [2024-12-09 11:15:18.333112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.480 qpair failed and we were unable to recover it.
01:04:17.480 [2024-12-09 11:15:18.333194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.333208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.333282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.333295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.333414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.333456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.333594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.333637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.333796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.333839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.334973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.481 [2024-12-09 11:15:18.334989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.481 qpair failed and we were unable to recover it.
01:04:17.481 [2024-12-09 11:15:18.335074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.335087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.335238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.335282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.335528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.335677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.335722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.335929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.335971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 
01:04:17.481 [2024-12-09 11:15:18.336114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.336205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.336314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.336509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.336732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 
01:04:17.481 [2024-12-09 11:15:18.336931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.336974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 
01:04:17.481 [2024-12-09 11:15:18.337694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.337984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.337996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.338071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 
01:04:17.481 [2024-12-09 11:15:18.338156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.338242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.338398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.338482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.481 [2024-12-09 11:15:18.338568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 
01:04:17.481 [2024-12-09 11:15:18.338667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.481 [2024-12-09 11:15:18.338681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.481 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.338749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.338762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.338847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.338860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.338945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.338958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.339133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.339699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.339976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.339989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.340162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.340602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.340970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.340983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.341113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.341617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.341967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.341980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.342117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 
01:04:17.482 [2024-12-09 11:15:18.342206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.342369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.342464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.342554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.482 qpair failed and we were unable to recover it. 01:04:17.482 [2024-12-09 11:15:18.342647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.482 [2024-12-09 11:15:18.342659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 
01:04:17.483 [2024-12-09 11:15:18.342741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.342754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.342835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.342848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.342917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.342929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 
01:04:17.483 [2024-12-09 11:15:18.343177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 
01:04:17.483 [2024-12-09 11:15:18.343865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.343950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.343962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 
01:04:17.483 [2024-12-09 11:15:18.344318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 01:04:17.483 [2024-12-09 11:15:18.344757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.483 [2024-12-09 11:15:18.344770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.483 qpair failed and we were unable to recover it. 
01:04:17.483 [2024-12-09 11:15:18.344849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.344861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.344929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.344941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.345925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.345997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.483 [2024-12-09 11:15:18.346495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.483 qpair failed and we were unable to recover it.
01:04:17.483 [2024-12-09 11:15:18.346567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.346580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.346664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.346676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.346820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.346840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.346905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.346917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.347923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.347935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.348907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.348920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.484 [2024-12-09 11:15:18.349853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.484 qpair failed and we were unable to recover it.
01:04:17.484 [2024-12-09 11:15:18.349922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.349934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.350986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.350998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.351944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.351956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.352948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.352961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.485 [2024-12-09 11:15:18.353688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.485 qpair failed and we were unable to recover it.
01:04:17.485 [2024-12-09 11:15:18.353782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.353794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.353872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.353884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.353954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.353966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.354863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.354992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.355932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.355944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.356025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.356038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.356110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.486 [2024-12-09 11:15:18.356125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.486 qpair failed and we were unable to recover it.
01:04:17.486 [2024-12-09 11:15:18.356205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 
01:04:17.486 [2024-12-09 11:15:18.356728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.356970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.356983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.357079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.357161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 
01:04:17.486 [2024-12-09 11:15:18.357257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.357350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.357480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.486 [2024-12-09 11:15:18.357704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.486 [2024-12-09 11:15:18.357750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.486 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.357897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.357941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.358077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.358119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.358318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.358361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.358527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.358571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.358734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.358748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.358825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.358838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.359033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.359485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.359882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.359895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.360047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.360060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.360228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.360271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.360486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.360529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.360676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.360720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.360871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.360914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.361125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.361238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.361343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.361506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.361697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.361902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.361950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.362111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.362717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.362957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.362971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.363058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.363071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.363273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.363316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 
01:04:17.487 [2024-12-09 11:15:18.363471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.363514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.363672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.363720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.487 [2024-12-09 11:15:18.363932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.487 [2024-12-09 11:15:18.363975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.487 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.364135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.364177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.364379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.364422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 
01:04:17.488 [2024-12-09 11:15:18.364625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.364677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.364831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.364872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.365030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.365072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.365247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.365262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.365405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.365443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 
01:04:17.488 [2024-12-09 11:15:18.365610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.365670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.365831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.365874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 
01:04:17.488 [2024-12-09 11:15:18.366584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.366941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.366954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.367043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 
01:04:17.488 [2024-12-09 11:15:18.367153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.367306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.367450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.367536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 01:04:17.488 [2024-12-09 11:15:18.367641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.488 [2024-12-09 11:15:18.367659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.488 qpair failed and we were unable to recover it. 
01:04:17.488 [2024-12-09 11:15:18.367798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.488 [2024-12-09 11:15:18.367811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.488 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054 connect() failed, errno = 111 → nvme_tcp.c:2288 sock connection error → "qpair failed and we were unable to recover it.") repeats roughly 110 more times between 11:15:18.367887 and 11:15:18.379317, nearly always for tqpair=0x7f1dc8000b90 and briefly for tqpair=0x7f1dcc000b90, every attempt against addr=10.0.0.2, port=4420 ...]
01:04:17.491 [2024-12-09 11:15:18.379464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.379477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.379566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.379579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.379672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.379685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.379749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.379761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.379860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.379873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 
01:04:17.491 [2024-12-09 11:15:18.380025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 
01:04:17.491 [2024-12-09 11:15:18.380505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.491 [2024-12-09 11:15:18.380789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.491 [2024-12-09 11:15:18.380802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.491 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.380870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.380882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.381011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.381504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.381875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.381963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.381976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.382412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.382866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.382960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.382974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.383329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.383787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.383970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.383983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.384060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.384072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.384153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.384165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 
01:04:17.492 [2024-12-09 11:15:18.384239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.384252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.384337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.384349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.492 [2024-12-09 11:15:18.384420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.492 [2024-12-09 11:15:18.384433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.492 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.384506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.384598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.384695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.384777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.384854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.384936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.384949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.385113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.385637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.385922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.385995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.386142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.386655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.386928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.386996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.387080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.387166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.387248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.387354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.387447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 
01:04:17.493 [2024-12-09 11:15:18.387609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.493 qpair failed and we were unable to recover it. 01:04:17.493 [2024-12-09 11:15:18.387768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.493 [2024-12-09 11:15:18.387781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.387864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.387877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.387959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.388223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.388410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.388648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.388810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.388919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.388933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.389143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.389185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.389339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.389381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.389526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.389567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.389729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.389744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.389820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.389833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.389987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.390230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.390410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.390565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.390656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.390748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.390844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.391502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.391946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.391989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.392212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.392259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.392373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.392386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.392457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.392472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.392581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.392624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.392847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.392890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.393025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.393068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.393218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.393263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.393402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.393443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.393656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.393700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.393902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.393944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 01:04:17.494 [2024-12-09 11:15:18.394146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.394187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.494 qpair failed and we were unable to recover it. 
01:04:17.494 [2024-12-09 11:15:18.394337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.494 [2024-12-09 11:15:18.394350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.394576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.394618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.394835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.394878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.395418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.395921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.395992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.396076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.396512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.396945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.396958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.397101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.397561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.397983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.397996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.398078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.495 [2024-12-09 11:15:18.398593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.398857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.398871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 01:04:17.495 [2024-12-09 11:15:18.399027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.495 [2024-12-09 11:15:18.399041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.495 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.399191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.399607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.399961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.399974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.400046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.400146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.400238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.400324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.400426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.400618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.400813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.400856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.401636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.401995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.402159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.402766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.402946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.402958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.403041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.403139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 
01:04:17.496 [2024-12-09 11:15:18.403222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.403373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.403454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.496 [2024-12-09 11:15:18.403538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.496 [2024-12-09 11:15:18.403551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.496 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.403686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.403700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.403784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.403797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.403863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.403876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.403950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.403963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.404268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.404851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.404936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.404949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.405339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.405786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.405931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.405944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.406078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.406233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.406413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.406608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.406802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.406898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.407128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.407370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.407622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 
01:04:17.497 [2024-12-09 11:15:18.407721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.407809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.497 [2024-12-09 11:15:18.407909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.497 [2024-12-09 11:15:18.407923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.497 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.408486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.408925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.408994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.409007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.409089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.409102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.409321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.409363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.409563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.409605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.409763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.409818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.410015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.410332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.410519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.410658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.410868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.410970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.410983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.411182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.411225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.411360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.411402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.411547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.411590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.411733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.411749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.411836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.411850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.412046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.412059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.412211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.412255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.412455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.412497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.412710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.412748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 01:04:17.498 [2024-12-09 11:15:18.412959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.498 [2024-12-09 11:15:18.412972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.498 qpair failed and we were unable to recover it. 
01:04:17.498 [2024-12-09 11:15:18.413171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.498 [2024-12-09 11:15:18.413184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.498 qpair failed and we were unable to recover it.
01:04:17.498 [... this three-line sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats for tqpairs 0x7f1dd4000b90, 0x7f1dcc000b90, 0x7f1dc8000b90, and 0x5f84d0, all with addr=10.0.0.2, port=4420, from 11:15:18.413 through 11:15:18.427 ...]
01:04:17.501 [2024-12-09 11:15:18.427037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 
01:04:17.501 [2024-12-09 11:15:18.427679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.427977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.427989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.428082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.428095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 
01:04:17.501 [2024-12-09 11:15:18.428168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.428322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.501 [2024-12-09 11:15:18.428335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.501 qpair failed and we were unable to recover it. 01:04:17.501 [2024-12-09 11:15:18.428402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.428495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.428581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.428677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.428823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.428919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.428931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.428997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.429196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.429713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.429919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.429932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.430325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.430785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.430938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.430951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.431631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.431941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.431955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.432118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.432141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.432221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.432236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 
01:04:17.502 [2024-12-09 11:15:18.432372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.502 [2024-12-09 11:15:18.432387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.502 qpair failed and we were unable to recover it. 01:04:17.502 [2024-12-09 11:15:18.432507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.432522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.432595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.432607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.432687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.432701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.432789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.432803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.432904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.432918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.433522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.433965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.433979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.434055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.434672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.434955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.434967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.435208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [2024-12-09 11:15:18.435664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.435842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.435990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.436004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.436080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.436093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 01:04:17.503 [2024-12-09 11:15:18.436176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.503 [2024-12-09 11:15:18.436189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.503 qpair failed and we were unable to recover it. 
01:04:17.503 [... identical connect() failed, errno = 111 / qpair recovery failures repeat from 11:15:18.436323 through 11:15:18.454415 for tqpair=0x7f1dc8000b90, 0x7f1dcc000b90, 0x7f1dd4000b90, and 0x5f84d0, all with addr=10.0.0.2, port=4420 ...]
01:04:17.506 [2024-12-09 11:15:18.454623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.454680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 01:04:17.506 [2024-12-09 11:15:18.454831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.454876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 01:04:17.506 [2024-12-09 11:15:18.455108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.455151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 01:04:17.506 [2024-12-09 11:15:18.455411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.455454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 01:04:17.506 [2024-12-09 11:15:18.455643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.455660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 
01:04:17.506 [2024-12-09 11:15:18.455820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.506 [2024-12-09 11:15:18.455863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.506 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.456097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.456144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.456359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.456402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.456623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.456642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.456872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.456886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.456984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.456997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.457094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.457108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.457333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.457377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.457593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.457641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.457933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.457948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.458161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.458204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.458364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.458407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.458633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.458675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.458815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.458829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.458914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.458927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.459084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.459755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.459958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.459971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.460296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.460791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.460956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.460984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.461192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.461234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.461378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.461421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.461625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.461679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.461905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.461951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.462108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.462154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.462300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.462345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.462488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.462529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.462736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.462779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 01:04:17.507 [2024-12-09 11:15:18.462940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.507 [2024-12-09 11:15:18.462981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.507 qpair failed and we were unable to recover it. 
01:04:17.507 [2024-12-09 11:15:18.463158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.463203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.463364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.463410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.463600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.463657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.463822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.463867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.464034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.464076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.464236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.464280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.464492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.464537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.464803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.464849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.465273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.465761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.465909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.465922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.466541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.466962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.466975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.467048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.467062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.467147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.467160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.467296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.467310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.467383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.467397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 01:04:17.508 [2024-12-09 11:15:18.467472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.508 [2024-12-09 11:15:18.467485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.508 qpair failed and we were unable to recover it. 
01:04:17.508 [2024-12-09 11:15:18.467563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.508 [2024-12-09 11:15:18.467577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.508 qpair failed and we were unable to recover it.
[the three-line error sequence above repeats continuously from 11:15:18.467563 through 11:15:18.480690, alternating between tqpair=0x7f1dcc000b90, tqpair=0x7f1dd4000b90, and tqpair=0x7f1dc8000b90, all with addr=10.0.0.2, port=4420]
01:04:17.511 [2024-12-09 11:15:18.480826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.480840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 01:04:17.511 [2024-12-09 11:15:18.480922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.480936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 01:04:17.511 [2024-12-09 11:15:18.481019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.481033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 01:04:17.511 [2024-12-09 11:15:18.481120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.481134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 01:04:17.511 [2024-12-09 11:15:18.481279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.481322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 
01:04:17.511 [2024-12-09 11:15:18.481576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.511 [2024-12-09 11:15:18.481619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.511 qpair failed and we were unable to recover it. 01:04:17.511 [2024-12-09 11:15:18.481837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.481851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.481927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.481940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.482216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.482752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.482926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.482997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.483282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.483869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.483966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.483979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.484408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.484842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.484856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.485056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.485145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.485268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.485438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.485598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.485767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.485937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.485951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.486022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.486036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.486126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.486139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 01:04:17.512 [2024-12-09 11:15:18.486214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.512 [2024-12-09 11:15:18.486228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.512 qpair failed and we were unable to recover it. 
01:04:17.512 [2024-12-09 11:15:18.486320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.486474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.486565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.486672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.486756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.486841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.486857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.487614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.487922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.487999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.488077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.488764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.488956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.488972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.489253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 01:04:17.513 [2024-12-09 11:15:18.489677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it. 
01:04:17.513 [2024-12-09 11:15:18.489766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.513 [2024-12-09 11:15:18.489779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.513 qpair failed and we were unable to recover it.
[... the same failure triplet — connect() failed, errno = 111 from posix.c:1054:posix_sock_create, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error, then "qpair failed and we were unable to recover it." — repeats continuously from 11:15:18.489918 through 11:15:18.503027, cycling between tqpairs 0x7f1dd4000b90, 0x7f1dc8000b90 and 0x7f1dcc000b90, all targeting addr=10.0.0.2, port=4420 ...]
01:04:17.516 [2024-12-09 11:15:18.503158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.516 [2024-12-09 11:15:18.503173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.516 qpair failed and we were unable to recover it. 01:04:17.516 [2024-12-09 11:15:18.503247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.516 [2024-12-09 11:15:18.503260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.516 qpair failed and we were unable to recover it. 01:04:17.516 [2024-12-09 11:15:18.503423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.503466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.503603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.503657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.503795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.503838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.504001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.504542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.504850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.504994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.505032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.505173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.505217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.505356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.505399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.505618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.505673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.505891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.505936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.506140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.506402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.506598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.506689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.506843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.506955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.506968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.507162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.507716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.507908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.507922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.508242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.508868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.508911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 
01:04:17.517 [2024-12-09 11:15:18.509140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.517 [2024-12-09 11:15:18.509154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.517 qpair failed and we were unable to recover it. 01:04:17.517 [2024-12-09 11:15:18.509234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.509406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.509571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.509669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.509832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.509927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.509942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.510090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.510105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.510263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.510308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.510462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.510505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.510665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.510710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.510969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.510983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.511121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.511135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.511272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.511328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.511545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.511590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.511766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.511811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.511911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.511924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.512401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.512834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.512937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.512952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.513531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.513953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.513967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [2024-12-09 11:15:18.514118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.514131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.514290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.514333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.514481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.514524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.514683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.514729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 01:04:17.518 [2024-12-09 11:15:18.514951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.518 [2024-12-09 11:15:18.514965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.518 qpair failed and we were unable to recover it. 
01:04:17.518 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages repeat for addr=10.0.0.2, port=4420 through 11:15:18.529; repeats elided ...]
01:04:17.521 [2024-12-09 11:15:18.529406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.521 [2024-12-09 11:15:18.529420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.521 qpair failed and we were unable to recover it. 01:04:17.521 [2024-12-09 11:15:18.529493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.521 [2024-12-09 11:15:18.529507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.521 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.529619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.529632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.529835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.529850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.529928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.529942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.530095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.530199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.530352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.530510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.530752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.530937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.530980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.531182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.531195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.531331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.531345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.531497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.531510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.531671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.531716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.531854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.531898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.532045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.532088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.532288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.532338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.532555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.532599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.532834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.532885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.533146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.533236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.533429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.533531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.533621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.533792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.533971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.533985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.534341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.534807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.534967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.534980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.535119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.535132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.535216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.535230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.535363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.535377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.522 [2024-12-09 11:15:18.535471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.535485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 
01:04:17.522 [2024-12-09 11:15:18.535569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.522 [2024-12-09 11:15:18.535583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.522 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.535722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.535736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.535821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.535835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.535906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.535920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.536233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.536715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.536896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.536911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.537040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.537197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.537212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.537347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.537401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.537560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.537603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.537793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.537839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.538342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.538959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.538999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.539071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.539171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.539337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.539436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.539579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.539800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.539845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.539998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.540044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.540217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.540232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.540320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.540335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.540485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.540498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 01:04:17.523 [2024-12-09 11:15:18.540573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.523 [2024-12-09 11:15:18.540586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.523 qpair failed and we were unable to recover it. 
01:04:17.523 [2024-12-09 11:15:18.540724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.540738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.540836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.540850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.540934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.540947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.541101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.541253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.541352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.541472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.523 [2024-12-09 11:15:18.541559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.523 qpair failed and we were unable to recover it.
01:04:17.523 [2024-12-09 11:15:18.541627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.541641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.541751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.541773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.541925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.541946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.542975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.542989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.543905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.543918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.544911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.544923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.524 [2024-12-09 11:15:18.545677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.524 [2024-12-09 11:15:18.545691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.524 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.545769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.545782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.545849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.545862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.545932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.545945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.546930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.546944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.547956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.547969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.548044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.548058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.548130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.548220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.548233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.548309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.548322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.525 [2024-12-09 11:15:18.548522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.525 [2024-12-09 11:15:18.548535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.525 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.548629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.548642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.548717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.548730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.548801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.548815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.548893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.548907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.548988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.549936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.549949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.550950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.550963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.551974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.551987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.552053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.552066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.552147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.552160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.552235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.552249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.526 qpair failed and we were unable to recover it.
01:04:17.526 [2024-12-09 11:15:18.552322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.526 [2024-12-09 11:15:18.552337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.527 [2024-12-09 11:15:18.552417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.527 [2024-12-09 11:15:18.552510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.527 [2024-12-09 11:15:18.552602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.527 [2024-12-09 11:15:18.552698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.527 [2024-12-09 11:15:18.552847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.527 qpair failed and we were unable to recover it.
01:04:17.527 [2024-12-09 11:15:18.552923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.552936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.553378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.553836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.553923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.553936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.554367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.554883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.554973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.554986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.555387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.555866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 
01:04:17.527 [2024-12-09 11:15:18.555953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.555965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.556046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.556059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.556204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.527 [2024-12-09 11:15:18.556217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.527 qpair failed and we were unable to recover it. 01:04:17.527 [2024-12-09 11:15:18.556288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.556378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.556474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.556565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.556657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.556741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.556840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.556924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.556937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.557462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.557903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.557984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.557996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.558414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.558841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.558932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.558945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.559281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 
01:04:17.528 [2024-12-09 11:15:18.559802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.559890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.528 [2024-12-09 11:15:18.559903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.528 qpair failed and we were unable to recover it. 01:04:17.528 [2024-12-09 11:15:18.560055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 
01:04:17.529 [2024-12-09 11:15:18.560329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 
01:04:17.529 [2024-12-09 11:15:18.560804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.560893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.560906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.561053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.561121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.561134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 01:04:17.529 [2024-12-09 11:15:18.561205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.529 [2024-12-09 11:15:18.561217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.529 qpair failed and we were unable to recover it. 
01:04:17.529 [2024-12-09 11:15:18.561286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.561876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.561889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.562905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.562999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.563012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.563096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.563109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.529 qpair failed and we were unable to recover it.
01:04:17.529 [2024-12-09 11:15:18.563182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.529 [2024-12-09 11:15:18.563194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.563960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.563972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.564920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.564989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.565945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.565958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.566908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.566921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.567005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.567018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.567167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.567180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.530 qpair failed and we were unable to recover it.
01:04:17.530 [2024-12-09 11:15:18.567379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.530 [2024-12-09 11:15:18.567392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.567954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.567968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.568859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.568995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.569876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.569889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.570807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.570819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.571010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.571023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.571159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.571172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.571265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.531 [2024-12-09 11:15:18.571277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.531 qpair failed and we were unable to recover it.
01:04:17.531 [2024-12-09 11:15:18.571346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.531 [2024-12-09 11:15:18.571359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.531 qpair failed and we were unable to recover it. 01:04:17.531 [2024-12-09 11:15:18.571434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.531 [2024-12-09 11:15:18.571447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.531 qpair failed and we were unable to recover it. 01:04:17.531 [2024-12-09 11:15:18.571528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.531 [2024-12-09 11:15:18.571542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.531 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.571622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.571634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.571710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.571723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.571809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.571822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.571957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.571970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.572338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.572799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.572945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.572958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.573307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.573780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.573961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.573974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.574214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.574669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.574910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.574925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.575013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.575026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.575158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.575171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 
01:04:17.532 [2024-12-09 11:15:18.575243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.575256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.532 [2024-12-09 11:15:18.575344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.532 [2024-12-09 11:15:18.575356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.532 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.575496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.575586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.575685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.575798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.575882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.575975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.575988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.576233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.576686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.576948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.576961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.577187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.577745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.577932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.577945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.578224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.533 [2024-12-09 11:15:18.578612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 
01:04:17.533 [2024-12-09 11:15:18.578765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.533 [2024-12-09 11:15:18.578778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.533 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.578866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.578878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.578952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.578964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 
01:04:17.534 [2024-12-09 11:15:18.579206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 01:04:17.534 [2024-12-09 11:15:18.579586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.534 [2024-12-09 11:15:18.579600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.534 qpair failed and we were unable to recover it. 
01:04:17.534 [2024-12-09 11:15:18.579712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.534 [2024-12-09 11:15:18.579726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.534 qpair failed and we were unable to recover it.
01:04:17.535 [2024-12-09 11:15:18.583045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.535 [2024-12-09 11:15:18.583060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.535 qpair failed and we were unable to recover it.
01:04:17.535 [2024-12-09 11:15:18.583899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.535 [2024-12-09 11:15:18.583916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.535 qpair failed and we were unable to recover it.
[The three-line error sequence above repeats continuously from 11:15:18.579712 through 11:15:18.592238 (over 100 occurrences), alternating among tqpair=0x7f1dd4000b90, 0x7f1dc8000b90, and 0x7f1dcc000b90; every attempt fails to connect to 10.0.0.2 port 4420 with errno = 111 (ECONNREFUSED), and no qpair could be recovered.]
01:04:17.537 [2024-12-09 11:15:18.592323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.592480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.592592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.592711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.592808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.592911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.592927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.592998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.593493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.593948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.593962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.594035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.594634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.594907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.594922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.595197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 
01:04:17.537 [2024-12-09 11:15:18.595706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.537 [2024-12-09 11:15:18.595720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.537 qpair failed and we were unable to recover it. 01:04:17.537 [2024-12-09 11:15:18.595793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.595807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.595881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.595895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.595984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.595998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.596231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.596731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.596915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.596929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.597340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.597826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.597969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.598177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.598353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.598436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.598585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.598737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.598861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.598874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.599389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 01:04:17.538 [2024-12-09 11:15:18.599868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.538 [2024-12-09 11:15:18.599889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.538 qpair failed and we were unable to recover it. 
01:04:17.538 [2024-12-09 11:15:18.600041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.600154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.600247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.600350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.600585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 
01:04:17.539 [2024-12-09 11:15:18.600884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.600930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.601090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.601124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.601196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.601210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.601353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.601367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 01:04:17.539 [2024-12-09 11:15:18.601489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.539 [2024-12-09 11:15:18.601504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.539 qpair failed and we were unable to recover it. 
01:04:17.539 [2024-12-09 11:15:18.601635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.601857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.601900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.602943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.603908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.603921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.604892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.604984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.539 [2024-12-09 11:15:18.605003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.539 qpair failed and we were unable to recover it.
01:04:17.539 [2024-12-09 11:15:18.605082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.605747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.605809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.606032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6063f0 is same with the state(6) to be set
01:04:17.540 [2024-12-09 11:15:18.606300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.606351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.606518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.606565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.606744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.606790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.607006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.607049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.607282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.607325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.607544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.607586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.607771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.607785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.607862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.607875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.608919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.608935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.609931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.609944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.610058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.610072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.610207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.610221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.610294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.610309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.610393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.610407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.540 qpair failed and we were unable to recover it.
01:04:17.540 [2024-12-09 11:15:18.610478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.540 [2024-12-09 11:15:18.610492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.610570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.610583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.610666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.610681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.610879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.610893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.610985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.610998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.611983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.611996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.612915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.612993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.613979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.613993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.614917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.614931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.541 qpair failed and we were unable to recover it.
01:04:17.541 [2024-12-09 11:15:18.615002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.541 [2024-12-09 11:15:18.615016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.615934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.542 [2024-12-09 11:15:18.615948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.542 qpair failed and we were unable to recover it.
01:04:17.542 [2024-12-09 11:15:18.616087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 
01:04:17.542 [2024-12-09 11:15:18.616604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.616948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.616962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 
01:04:17.542 [2024-12-09 11:15:18.617120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 
01:04:17.542 [2024-12-09 11:15:18.617661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.617937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.617951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 
01:04:17.542 [2024-12-09 11:15:18.618103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 
01:04:17.542 [2024-12-09 11:15:18.618623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.542 [2024-12-09 11:15:18.618637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.542 qpair failed and we were unable to recover it. 01:04:17.542 [2024-12-09 11:15:18.618735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.618750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.618886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.618900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.618980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.618994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.619200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.619769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.619861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.619874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.620268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.620884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.620968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.620983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.621416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.621904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.621918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.621996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.622623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.622951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.622965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 01:04:17.543 [2024-12-09 11:15:18.623047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.543 [2024-12-09 11:15:18.623061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.543 qpair failed and we were unable to recover it. 
01:04:17.543 [2024-12-09 11:15:18.623198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.623352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.623499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.623614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.623722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 
01:04:17.544 [2024-12-09 11:15:18.623838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.623937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.623950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 
01:04:17.544 [2024-12-09 11:15:18.624414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 01:04:17.544 [2024-12-09 11:15:18.624925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.544 [2024-12-09 11:15:18.624939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.544 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.637454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.637468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.637560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.637575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.637656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.637671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.637758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.637773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.637928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.637950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.638038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.638599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.638970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.638984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.639141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.639805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.639886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.639899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.640348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.640841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.640942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.640955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 
01:04:17.845 [2024-12-09 11:15:18.641517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.845 qpair failed and we were unable to recover it. 01:04:17.845 [2024-12-09 11:15:18.641734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.845 [2024-12-09 11:15:18.641748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.641885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.641900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.641975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.641989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.642079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.642603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.642937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.642951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.643150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.643720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.643956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.643970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.644249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.644737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.644840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.644992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.645338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 01:04:17.846 [2024-12-09 11:15:18.645859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.846 [2024-12-09 11:15:18.645873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.846 qpair failed and we were unable to recover it. 
01:04:17.846 [2024-12-09 11:15:18.645946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.846 [2024-12-09 11:15:18.645959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.846 qpair failed and we were unable to recover it.
01:04:17.846 [2024-12-09 11:15:18.646032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.646914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.646929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.647848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.647862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.648856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.648870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.847 qpair failed and we were unable to recover it.
01:04:17.847 [2024-12-09 11:15:18.649664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.847 [2024-12-09 11:15:18.649678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.649765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.649778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.649927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.649940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.650869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.650884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.651961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.651974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.652922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.652998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.653948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.848 [2024-12-09 11:15:18.653962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.848 qpair failed and we were unable to recover it.
01:04:17.848 [2024-12-09 11:15:18.654096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.654965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.654979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.655954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.655967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.656923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.656937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.657910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.657923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.658004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.658020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.658190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.658203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.849 qpair failed and we were unable to recover it.
01:04:17.849 [2024-12-09 11:15:18.658333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.849 [2024-12-09 11:15:18.658346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.850 qpair failed and we were unable to recover it.
01:04:17.850 [2024-12-09 11:15:18.658436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.850 [2024-12-09 11:15:18.658450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.850 qpair failed and we were unable to recover it.
01:04:17.850 [2024-12-09 11:15:18.658544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.850 [2024-12-09 11:15:18.658557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.850 qpair failed and we were unable to recover it.
01:04:17.850 [2024-12-09 11:15:18.658640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.850 [2024-12-09 11:15:18.658659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.850 qpair failed and we were unable to recover it.
01:04:17.850 [2024-12-09 11:15:18.658746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.658759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.658848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.658862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.658939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.658952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.659307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.659753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.659941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.659954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.660242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.660802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.660973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.660987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.661331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.661782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.661925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.661939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.662074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.662089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.662165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.662178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 01:04:17.850 [2024-12-09 11:15:18.662246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.850 [2024-12-09 11:15:18.662260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.850 qpair failed and we were unable to recover it. 
01:04:17.850 [2024-12-09 11:15:18.662346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.662501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.662588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.662739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.662823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.662902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.662985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.662999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.663365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.663912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.663927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.663997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.664591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.664969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.664982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.665050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.851 [2024-12-09 11:15:18.665631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.665897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.665910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 01:04:17.851 [2024-12-09 11:15:18.666036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.851 [2024-12-09 11:15:18.666050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.851 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.666139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.666638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.666947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.666960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.667297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.667802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.667940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.667954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.668388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.668958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.668972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.669056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.669627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.669953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.669966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.670056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.670145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 
01:04:17.852 [2024-12-09 11:15:18.670236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.670325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.670425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.852 [2024-12-09 11:15:18.670541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.852 [2024-12-09 11:15:18.670555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.852 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.670756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.670771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.670847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.670861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.671496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.671958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.671972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.672060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.672535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.672958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.672972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.673169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.673445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.673601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.673697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.673847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.673861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.674008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.674053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.674192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.674236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.674382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.674424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.674603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.674656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.674859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.674903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.675151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.675347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.675512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.675612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 
01:04:17.853 [2024-12-09 11:15:18.675734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.675914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.675927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.853 qpair failed and we were unable to recover it. 01:04:17.853 [2024-12-09 11:15:18.676000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.853 [2024-12-09 11:15:18.676013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854 [2024-12-09 11:15:18.676426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854 [2024-12-09 11:15:18.676883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.676969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.676984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854 [2024-12-09 11:15:18.677513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.677959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.677975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854 [2024-12-09 11:15:18.678066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854 [2024-12-09 11:15:18.678682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.678967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.678982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.679085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.679099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 01:04:17.854 [2024-12-09 11:15:18.679171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.854 [2024-12-09 11:15:18.679185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.854 qpair failed and we were unable to recover it. 
01:04:17.854-01:04:17.857 [2024-12-09 11:15:18.679262 .. 11:15:18.692217] (repeating) posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420 for tqpairs 0x5f84d0, 0x7f1dc8000b90, 0x7f1dcc000b90, and 0x7f1dd4000b90; each attempt ends with "qpair failed and we were unable to recover it."
01:04:17.857 [2024-12-09 11:15:18.692309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.692398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.692559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.692717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.692807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 
01:04:17.857 [2024-12-09 11:15:18.692903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.692916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.692992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.693083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.693168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.693318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 
01:04:17.857 [2024-12-09 11:15:18.693411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.693508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.857 qpair failed and we were unable to recover it. 01:04:17.857 [2024-12-09 11:15:18.693607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.857 [2024-12-09 11:15:18.693621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.693699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.693712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.693908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.693922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.693994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.694576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.694983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.694996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.695076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.695537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.695950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.695964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.696199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.696652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.696901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.696916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.697292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 
01:04:17.858 [2024-12-09 11:15:18.697854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.858 [2024-12-09 11:15:18.697867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.858 qpair failed and we were unable to recover it. 01:04:17.858 [2024-12-09 11:15:18.697952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.697965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.698339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.698893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.698984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.698997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.699464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.699849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.699934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.699948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.700425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.700839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.700921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.700935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.701439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 01:04:17.859 [2024-12-09 11:15:18.701854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.859 [2024-12-09 11:15:18.701867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.859 qpair failed and we were unable to recover it. 
01:04:17.859 [2024-12-09 11:15:18.701962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.701977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.702061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.702075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.702158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.702172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.702319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.702363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.702521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.702565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.702716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.702761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.702984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.703174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.703388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.703554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.703807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.703889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.703902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.704756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.704914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.704927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.705003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.705112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.705239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.705424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.705685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.705882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.705926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.706528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.706908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 01:04:17.860 [2024-12-09 11:15:18.706999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.860 [2024-12-09 11:15:18.707013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.860 qpair failed and we were unable to recover it. 
01:04:17.860 [2024-12-09 11:15:18.707164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.707178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.707372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.707388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.707468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.707512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.707692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.707741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.707898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.707942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.708105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.708665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.708909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.708923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.709258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.709796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.709879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.709892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.710336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.710807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.710901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.710916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.711002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.711015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.711165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.711212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.711375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.711419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.711576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.711621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 
01:04:17.861 [2024-12-09 11:15:18.711823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.711868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.712069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.712112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.712263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.712306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.861 [2024-12-09 11:15:18.712434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.861 [2024-12-09 11:15:18.712448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.861 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.712534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.712548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.712684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.712698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.712828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.712843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.712924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.712937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.713261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.713765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.713916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.713929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.714481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.714893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.714907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.715046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.715494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.715981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.715997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.716082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 
01:04:17.862 [2024-12-09 11:15:18.716627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.862 qpair failed and we were unable to recover it. 01:04:17.862 [2024-12-09 11:15:18.716818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.862 [2024-12-09 11:15:18.716831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.716900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.716913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.717117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.717573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.717930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.717943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.718072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.718763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.718933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.718946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.719270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.719804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.719909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.719964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.720648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.720922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.720936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.721007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.721021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 
01:04:17.863 [2024-12-09 11:15:18.721114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.721128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.863 qpair failed and we were unable to recover it. 01:04:17.863 [2024-12-09 11:15:18.721198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.863 [2024-12-09 11:15:18.721211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.721690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.721982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.721997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.722192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.722652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.722889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.722902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.723229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.723774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.723919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.723933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.724300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 01:04:17.864 [2024-12-09 11:15:18.724737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.864 [2024-12-09 11:15:18.724751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.864 qpair failed and we were unable to recover it. 
01:04:17.864 [2024-12-09 11:15:18.724819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.864 [2024-12-09 11:15:18.724832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.864 qpair failed and we were unable to recover it.
01:04:17.864 [2024-12-09 11:15:18.724906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.864 [2024-12-09 11:15:18.724920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.864 qpair failed and we were unable to recover it.
01:04:17.864 [2024-12-09 11:15:18.725003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.864 [2024-12-09 11:15:18.725018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.864 qpair failed and we were unable to recover it.
01:04:17.864 [2024-12-09 11:15:18.725104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.864 [2024-12-09 11:15:18.725120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.864 qpair failed and we were unable to recover it.
01:04:17.864 [2024-12-09 11:15:18.725204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.864 [2024-12-09 11:15:18.725218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.725980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.725995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.726918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.726932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.727934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.727947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.865 [2024-12-09 11:15:18.728688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.865 qpair failed and we were unable to recover it.
01:04:17.865 [2024-12-09 11:15:18.728755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.728769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.728901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.728915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.729056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.729069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.729220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.729263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.729479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.729527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.729676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.729723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.729885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.729934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.730818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.730831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.731908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.731922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.732065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.732105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.732308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.732352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.732571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.732614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.732760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.732804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.732954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.732999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.733212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.733261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.733495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.733544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.733693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.733739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.733958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.734002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.734136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.734150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.734232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.734245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.734449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.734500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.734719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.734774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.734997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.735207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.735303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.735460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.735564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.866 [2024-12-09 11:15:18.735673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.866 [2024-12-09 11:15:18.735689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.866 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.735765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.735785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.735872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.735885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.735975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.735990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.736765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.736981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.737905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.737918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.738060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.867 [2024-12-09 11:15:18.738076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.867 qpair failed and we were unable to recover it.
01:04:17.867 [2024-12-09 11:15:18.738169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.738280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.738366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.738471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.738663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 
01:04:17.867 [2024-12-09 11:15:18.738866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.738919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 
01:04:17.867 [2024-12-09 11:15:18.739715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.739897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.739986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 
01:04:17.867 [2024-12-09 11:15:18.740274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.740782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 
01:04:17.867 [2024-12-09 11:15:18.740875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.867 [2024-12-09 11:15:18.740888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.867 qpair failed and we were unable to recover it. 01:04:17.867 [2024-12-09 11:15:18.741028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.741492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.741977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.741990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.742066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.742543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.742889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.742902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.743239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.743839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.743944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.743958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.744370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.744860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.744956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.744971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.745043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.745057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.745203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.745217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 01:04:17.868 [2024-12-09 11:15:18.745300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.868 [2024-12-09 11:15:18.745314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.868 qpair failed and we were unable to recover it. 
01:04:17.868 [2024-12-09 11:15:18.745405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.745419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.745554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.745568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.745659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.745674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.745740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.745759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.745914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.745928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.746004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.746665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.746924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.746939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.747261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.747789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.747834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.747999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.748212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.748408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.748619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.748715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.748894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.748910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.748991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.749069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.749256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.749463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.749709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.749821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.749912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.749927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.750001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.750015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 01:04:17.869 [2024-12-09 11:15:18.750104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.869 [2024-12-09 11:15:18.750118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.869 qpair failed and we were unable to recover it. 
01:04:17.869 [2024-12-09 11:15:18.750289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.750335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.750559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.750607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.750837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.750882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.751955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.751969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.752875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.752919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.753061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.753105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.753255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.753299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.753535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.753579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.753826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.753841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.753927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.753942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.754903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.754952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.755915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.755964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.756117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.756169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.756315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.756359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.756507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.756563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.870 [2024-12-09 11:15:18.756651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.870 [2024-12-09 11:15:18.756666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.870 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.756744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.756758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.756850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.756864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.756949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.756963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.757961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.757977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.758113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.758127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.758201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.758215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.758316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.758359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.758519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.758565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.758803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.758849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.759903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.759986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.760838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.760894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.871 qpair failed and we were unable to recover it.
01:04:17.871 [2024-12-09 11:15:18.761747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.871 [2024-12-09 11:15:18.761761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.761836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.761851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.761996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.762965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.762980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.763873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.763917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.764064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.764106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.764217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.764231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.764357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.872 [2024-12-09 11:15:18.764370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.872 qpair failed and we were unable to recover it.
01:04:17.872 [2024-12-09 11:15:18.764524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.764537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.764613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.764627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.764767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.764781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.764862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.764875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.764974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.764988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 
01:04:17.872 [2024-12-09 11:15:18.765058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 
01:04:17.872 [2024-12-09 11:15:18.765672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.765889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.765903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.766046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.766092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.766248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.766293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 
01:04:17.872 [2024-12-09 11:15:18.766444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.766490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.766665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.766712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.766875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.766922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.767093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.767139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.767299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.767345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 
01:04:17.872 [2024-12-09 11:15:18.767578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.767616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.872 qpair failed and we were unable to recover it. 01:04:17.872 [2024-12-09 11:15:18.767826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.872 [2024-12-09 11:15:18.767842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.767922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.767937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.768333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.768842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.768898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.769052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.769328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.769507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.769599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.769699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.769792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.769888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.769903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.770059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.770073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.770152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.770167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.770270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.770284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.770422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.770453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.770744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.770789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.770992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.771035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.771253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.771296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.771518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.771532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.771677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.771691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.771844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.771885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.772114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.772312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.772443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.772594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.772700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.772858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.772873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.773070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.773164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.773265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.773379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.773576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.773787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.773834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 01:04:17.873 [2024-12-09 11:15:18.774048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.774109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.873 qpair failed and we were unable to recover it. 
01:04:17.873 [2024-12-09 11:15:18.774280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.873 [2024-12-09 11:15:18.774325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.774526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.774569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.774683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.774697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.774776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.774790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.774934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.774947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 
01:04:17.874 [2024-12-09 11:15:18.775093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 
01:04:17.874 [2024-12-09 11:15:18.775730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.775914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.775960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.776128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.776174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.776350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.776399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 
01:04:17.874 [2024-12-09 11:15:18.776611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.776673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.776832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.776878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.777047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.777349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.777435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 
01:04:17.874 [2024-12-09 11:15:18.777530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.777641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.777845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.777888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.778114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.778160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 01:04:17.874 [2024-12-09 11:15:18.778364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.874 [2024-12-09 11:15:18.778381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.874 qpair failed and we were unable to recover it. 
01:04:17.874 [2024-12-09 11:15:18.778528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.778545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.778642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.778660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.778750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.778766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.778910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.778961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.779173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.779215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.779448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.779493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.779633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.779653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.779756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.779771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.779968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.874 [2024-12-09 11:15:18.779982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.874 qpair failed and we were unable to recover it.
01:04:17.874 [2024-12-09 11:15:18.780127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.780141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.780228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.780243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.780402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.780445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.780606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.780662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.780818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.780862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.781984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.781998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.782917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.782931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.783940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.783984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.784146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.784192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.784398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.784441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.784714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.784761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.784923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.784967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.785176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.785221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.785367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.785411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.785571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.785617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.785850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.785869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.785980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.786027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.786233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.786280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.786483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.786528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.786671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.786692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.786784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.875 [2024-12-09 11:15:18.786798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.875 qpair failed and we were unable to recover it.
01:04:17.875 [2024-12-09 11:15:18.786878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.786895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.786975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.786989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.787122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.787160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.787366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.787411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.787626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.787697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.787867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.787912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.788137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.788182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.788391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.788406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.788566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.788610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.788777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.788823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.789029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.789075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.789286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.789330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.789477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.789526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.789754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.789801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.789950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.789994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.790162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.790208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.790432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.790475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.790620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.790674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.790945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.790989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.791170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.791216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.791459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.791736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.791754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.791860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.791875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.791969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.791983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.792913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.792926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.793791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.793837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.876 [2024-12-09 11:15:18.794003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.876 [2024-12-09 11:15:18.794053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.876 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.794939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.794953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.795114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.795157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.795434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.795477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.795633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.795691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.795857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.795904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.796061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.796106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.796304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.796349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.796498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.796542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.796702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.877 [2024-12-09 11:15:18.796755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.877 qpair failed and we were unable to recover it. 01:04:17.877 [2024-12-09 11:15:18.796920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.877 [2024-12-09 11:15:18.796967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.877 qpair failed and we were unable to recover it. 01:04:17.877 [2024-12-09 11:15:18.797207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.877 [2024-12-09 11:15:18.797253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.877 qpair failed and we were unable to recover it. 01:04:17.877 [2024-12-09 11:15:18.797410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.877 [2024-12-09 11:15:18.797431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.877 qpair failed and we were unable to recover it. 01:04:17.877 [2024-12-09 11:15:18.797578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.877 [2024-12-09 11:15:18.797618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.877 qpair failed and we were unable to recover it. 
01:04:17.877 [2024-12-09 11:15:18.797863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.797907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.798144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.798190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.798361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.798406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.798631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.798687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.798871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.798888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.799042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.799086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.799249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.799292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.799522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.799565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.799693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.799708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.799855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.799870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.800872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.800920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.801068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.801114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.801390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.801443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.801585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.801600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.801740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.801754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.877 qpair failed and we were unable to recover it.
01:04:17.877 [2024-12-09 11:15:18.801850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.877 [2024-12-09 11:15:18.801864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.801931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.801946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.802830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.802994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.803041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.803193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.803239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.803410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.803459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.803678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.803730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.803918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.803933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.804890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.804905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.805050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.805094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.805267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.805314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.805538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.805583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.805835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.805883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.806047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.806090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.806310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.806356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.878 [2024-12-09 11:15:18.806578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.878 [2024-12-09 11:15:18.806621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.878 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.806804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.806849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.807070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.807113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.807322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.807365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.807518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.807561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.807740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.807761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.807912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.807928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.808835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.808882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.809046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.809250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.809296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.809514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.809560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.809735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.809753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.809851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.809867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.810036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.810082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.810301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.810347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.810571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.810592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.810765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.810811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.810967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.811013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.811210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.811430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.811445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.811594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.811638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.811800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.811844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.812074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.812121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.812414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.812460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.812693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.812711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.812795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.812809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.812981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.813027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.813246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.813291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.813447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.813463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.813674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.813720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.813880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.813925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.814089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.814132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.814312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.814357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.814574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.814618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.879 qpair failed and we were unable to recover it.
01:04:17.879 [2024-12-09 11:15:18.814808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.879 [2024-12-09 11:15:18.814853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.815916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.880 [2024-12-09 11:15:18.815959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.880 qpair failed and we were unable to recover it.
01:04:17.880 [2024-12-09 11:15:18.816126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.816169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.816373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.816416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.816585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.816600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.816692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.816708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.817016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.817062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.817220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.817267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.817434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.817478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.817627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.817685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.817886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.817908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.818081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.818128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.818333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.818376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.818519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.818573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.818721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.818736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.818819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.818833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.819032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.819123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.819212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.819341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.819595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.819803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.819848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.820021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.820065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.820277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.820323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.820487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.820531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.820695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.820740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.820946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.820965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.821147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.821196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.821415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.821463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.821696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.821742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.821897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.821941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.822097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.822141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 
01:04:17.880 [2024-12-09 11:15:18.822360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.822375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.880 [2024-12-09 11:15:18.822462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.880 [2024-12-09 11:15:18.822499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.880 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.822709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.822758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.822947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.822994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.823182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.823228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.823496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.823539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.823765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.823785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.823903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.823919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.824052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.824069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.824172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.824215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.824381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.824424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.824567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.824611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.824832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.824875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.825160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.825209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.825369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.825384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.825531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.825546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.825682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.825697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.825895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.825910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.826003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.826086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.826341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.826632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.826847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.826942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.826956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.827055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.827156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.827245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.827341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.827447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.827660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.827921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.827975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.828141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.828188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.828368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.828416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.828601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.828618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.828705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.828720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.828887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.828930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.829080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.829123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.829288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.829345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.829431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.829445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 01:04:17.881 [2024-12-09 11:15:18.829535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.881 [2024-12-09 11:15:18.829549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.881 qpair failed and we were unable to recover it. 
01:04:17.881 [2024-12-09 11:15:18.829690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.829705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.829876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.829919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.830197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.830245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.830410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.830453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.830668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.830713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 
01:04:17.882 [2024-12-09 11:15:18.830876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.830921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.831209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.831271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.831476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.831520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.831691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.831738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.831891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.831905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 
01:04:17.882 [2024-12-09 11:15:18.832042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 
01:04:17.882 [2024-12-09 11:15:18.832654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.832973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.832987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.833063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 
01:04:17.882 [2024-12-09 11:15:18.833153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.833239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.833387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.833501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 01:04:17.882 [2024-12-09 11:15:18.833661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.882 [2024-12-09 11:15:18.833677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.882 qpair failed and we were unable to recover it. 
01:04:17.882 [2024-12-09 11:15:18.833827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.833871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.834021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.834064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.834231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.834275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.834503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.834517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.834593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.834629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.834802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.834847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.835062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.835106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.835254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.835298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.882 qpair failed and we were unable to recover it.
01:04:17.882 [2024-12-09 11:15:18.835512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.882 [2024-12-09 11:15:18.835556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.835772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.835787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.835930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.835944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.836873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.836916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.837843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.837994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.838894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.838909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.839755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.839976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.840974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.883 [2024-12-09 11:15:18.840988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.883 qpair failed and we were unable to recover it.
01:04:17.883 [2024-12-09 11:15:18.841073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.841979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.841996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.842969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.842983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.843956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.843971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.844890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.844905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.884 qpair failed and we were unable to recover it.
01:04:17.884 [2024-12-09 11:15:18.845665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.884 [2024-12-09 11:15:18.845680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.845820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.845835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.845910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.845925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.885 [2024-12-09 11:15:18.846771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.885 qpair failed and we were unable to recover it.
01:04:17.885 [2024-12-09 11:15:18.846943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.846959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 
01:04:17.885 [2024-12-09 11:15:18.847439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.847839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 
01:04:17.885 [2024-12-09 11:15:18.847925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.847974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 
01:04:17.885 [2024-12-09 11:15:18.848770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.848958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.848972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 
01:04:17.885 [2024-12-09 11:15:18.849250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.849758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 
01:04:17.885 [2024-12-09 11:15:18.849854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.849868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.850032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.850047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.850135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.885 [2024-12-09 11:15:18.850185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.885 qpair failed and we were unable to recover it. 01:04:17.885 [2024-12-09 11:15:18.850329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.850371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.850574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.850617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.850836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.850850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.851711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.851973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.851987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.852335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.852945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.852993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.853151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.853362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.853514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.853599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.853698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.853797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.853895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.853927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.854082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.854126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.854350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.854397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.854589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.854604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.854758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.854803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.854951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.854996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.855215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.855259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.855467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.855481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.855635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.855697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.855903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.855947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.856101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.856369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.856568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.856682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 
01:04:17.886 [2024-12-09 11:15:18.856851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.856968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.886 [2024-12-09 11:15:18.856983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.886 qpair failed and we were unable to recover it. 01:04:17.886 [2024-12-09 11:15:18.857193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.857239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.857403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.857447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.857691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.857729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 
01:04:17.887 [2024-12-09 11:15:18.857805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.857821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.857919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.857935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 
01:04:17.887 [2024-12-09 11:15:18.858371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.858892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.858907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 
01:04:17.887 [2024-12-09 11:15:18.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.859086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.859245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.859261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.859350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.859365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.859440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.859455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 01:04:17.887 [2024-12-09 11:15:18.859538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.887 [2024-12-09 11:15:18.859552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.887 qpair failed and we were unable to recover it. 
01:04:17.887 [2024-12-09 11:15:18.859651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.859666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.859803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.859865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.860968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.860983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.861207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.861250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.861396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.861441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.861550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.861569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.861803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.861852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.862024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.862070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.862297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.862340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.862492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.887 qpair failed and we were unable to recover it.
01:04:17.887 [2024-12-09 11:15:18.862629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.887 [2024-12-09 11:15:18.862649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.862739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.862754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.862857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.862871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.862959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.862973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.863880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.863926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.864085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.864131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.864314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.864360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.864515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.864559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.864774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.864795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.864962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.865146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.865331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.865582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.865680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.865880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.865939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.866166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.866212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.866432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.866476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.866659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.866706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.866921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.866944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.867110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.867159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.888 [2024-12-09 11:15:18.867387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.888 [2024-12-09 11:15:18.867434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.888 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.867668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.867714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.867938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.867953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.868834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.868866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.869912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.869999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.870958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.870972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.871119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.871163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.871317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.871368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.871514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.871560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.871736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.871753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.871832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.871846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.872903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.872920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.873007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.873022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.873112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.873126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.873281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.873297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.873411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.889 [2024-12-09 11:15:18.873453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.889 qpair failed and we were unable to recover it.
01:04:17.889 [2024-12-09 11:15:18.873616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.873672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.873890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.873941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.874036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.874194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.874350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.874604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.874822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.874981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.875024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.875244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.875286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.875560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.875604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.875823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.875867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.876970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.876984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.877950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.877965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.878052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.878066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.878148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.878163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.878236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.878298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.878513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.878559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.878796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.878844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.879058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.879266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.879487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.879649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.890 [2024-12-09 11:15:18.879755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.890 qpair failed and we were unable to recover it.
01:04:17.890 [2024-12-09 11:15:18.879855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.890 [2024-12-09 11:15:18.879870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.890 qpair failed and we were unable to recover it. 01:04:17.890 [2024-12-09 11:15:18.880015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.890 [2024-12-09 11:15:18.880029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.890 qpair failed and we were unable to recover it. 01:04:17.890 [2024-12-09 11:15:18.880111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.890 [2024-12-09 11:15:18.880126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.890 qpair failed and we were unable to recover it. 01:04:17.890 [2024-12-09 11:15:18.880202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.890 [2024-12-09 11:15:18.880251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.890 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.880418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.880461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.880676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.880721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.880920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.880935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.881392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.881896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.881911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.882002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.882201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.882513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.882726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.882818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.882922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.882936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.883594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.883843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.883860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.884393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.884956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.884971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.885051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.885141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.885228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.885327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.885447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 
01:04:17.891 [2024-12-09 11:15:18.885637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.885865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.885909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.891 [2024-12-09 11:15:18.886123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.891 [2024-12-09 11:15:18.886166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.891 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.886382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.886424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.886571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.886613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.886789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.887161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.887411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.887597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.887818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.887904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.887918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.888309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.888875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.888969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.888984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.889351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.889781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.892 [2024-12-09 11:15:18.889873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.889886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.890028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.890072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.890287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.890333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.890538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.890581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 01:04:17.892 [2024-12-09 11:15:18.890836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.892 [2024-12-09 11:15:18.890881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.892 qpair failed and we were unable to recover it. 
01:04:17.893 [2024-12-09 11:15:18.891023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.893 [2024-12-09 11:15:18.891066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.893 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats verbatim for timestamps 11:15:18.891289 through 11:15:18.910240 ...]
01:04:17.895 [2024-12-09 11:15:18.910468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.895 [2024-12-09 11:15:18.910515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.895 qpair failed and we were unable to recover it. 01:04:17.895 [2024-12-09 11:15:18.910626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.895 [2024-12-09 11:15:18.910640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.895 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.910844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.910858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.910995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.911105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.911264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.911424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.911582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.911742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.911893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.911907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.911999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.912585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.912926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.913100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.913575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.913960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.913974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.914160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 
01:04:17.896 [2024-12-09 11:15:18.914688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.896 [2024-12-09 11:15:18.914773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.896 [2024-12-09 11:15:18.914787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.896 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.914883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.914900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.914978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.914992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.915152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.915623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.915920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.915992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.916106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.916218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.916389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.916476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.916581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.916794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.916964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.916977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.917458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.917919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.917933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.918070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.918535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.918841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 01:04:17.897 [2024-12-09 11:15:18.918990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.897 [2024-12-09 11:15:18.919004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:17.897 qpair failed and we were unable to recover it. 
01:04:17.897 [2024-12-09 11:15:18.919147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.919957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.919971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.920962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.920976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.921829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.921871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.922088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.922130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.922268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.922312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.922514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.922556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.922773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.922788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.922993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.923036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.923271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.923313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.923556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.923643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.923887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.923975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.924942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.924985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.925134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.925177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.898 [2024-12-09 11:15:18.925525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.898 qpair failed and we were unable to recover it.
01:04:17.898 [2024-12-09 11:15:18.925709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.925755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.926040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.926083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.926305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.926348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.926569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.926618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.926742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.926757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.926906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.926920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.927062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.927076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.927164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.927178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.927376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.927390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.927499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.927543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.927761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.927806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.928021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.928064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.928269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.928313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.928516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.928559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.928738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.928753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.928896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.928939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.929225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.929268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.929414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.929458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.929609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.929661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.929948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.929991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.930210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.930254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.930519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.930562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.930722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.930767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.931937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.931951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.932110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.932153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.932440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.932493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.932747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.932801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.932965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.933018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.933270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.933314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.933533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.933577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.933876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.933921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.934035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.934049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.934249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.934264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.934351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.899 [2024-12-09 11:15:18.934367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.899 qpair failed and we were unable to recover it.
01:04:17.899 [2024-12-09 11:15:18.934527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.934569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.934797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.934843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.935134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.935177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.935398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.935441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.935661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.935716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.935965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.935979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.936817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.936831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.937016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.937059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.937219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.937262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.937548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.937592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.937764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.937809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.937969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.938013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.938258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.938272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.938429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.938444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.938591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.938629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.938840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.938884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.939099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.939142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.939364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.939407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.939570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.939613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.939911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.939956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.940092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.940110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.940273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.940317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.940540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.940584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.940802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.940852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.940991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.941006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.941169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.941213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.941519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.941564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.941813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.941828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.942038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.942081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.942344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.942388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.942687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.942733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.943019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:17.900 [2024-12-09 11:15:18.943062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:17.900 qpair failed and we were unable to recover it.
01:04:17.900 [2024-12-09 11:15:18.943292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.900 [2024-12-09 11:15:18.943335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.943503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.943547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.943768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.943809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.943909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.943923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.944146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.944189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.944412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.944455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.944735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.944780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.945017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.945067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.945280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.945323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.945542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.945585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.945753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.945768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.945917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.945932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.946105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.946147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.946359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.946402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.946605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.946639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.946802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.946816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.946980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.947028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.947324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.947367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.947654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.947686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.947853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.947867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.948064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.948077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.948177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.948191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.948338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.948352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.948552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.948566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.948717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.948756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.948984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.949027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.949243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.949286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.949453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.949498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.949769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.949783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.949870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.949884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.950111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.950155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.950432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.950475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.950776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.950821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.951106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.951149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.951443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.951487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 
01:04:17.901 [2024-12-09 11:15:18.951720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.951735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.951948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.901 [2024-12-09 11:15:18.951991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.901 qpair failed and we were unable to recover it. 01:04:17.901 [2024-12-09 11:15:18.952232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.952274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.952561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.952605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.952927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.952971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.953211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.953254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.953565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.953608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.953907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.953952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.954210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.954225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.954463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.954477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.954612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.954626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.954861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.954906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.955172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.955228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.955515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.955558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.955787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.955832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.956121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.956164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.956407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.956451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.956747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.956792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.957097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.957140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.957362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.957406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.957633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.957688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.957972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.958016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.958279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.958322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.958605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.958669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.958991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.959319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.959418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.959603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.959789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.959879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.959893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.960047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.960091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.960325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.960368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.960667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.960712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.960920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.960964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.961265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.961278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.961455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.961469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.961621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.961676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.961897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.961940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.962222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.962265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.962515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.962565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 
01:04:17.902 [2024-12-09 11:15:18.962864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.902 [2024-12-09 11:15:18.962909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.902 qpair failed and we were unable to recover it. 01:04:17.902 [2024-12-09 11:15:18.963152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.903 [2024-12-09 11:15:18.963195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.903 qpair failed and we were unable to recover it. 01:04:17.903 [2024-12-09 11:15:18.963434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.903 [2024-12-09 11:15:18.963477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.903 qpair failed and we were unable to recover it. 01:04:17.903 [2024-12-09 11:15:18.963763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.903 [2024-12-09 11:15:18.963808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.903 qpair failed and we were unable to recover it. 01:04:17.903 [2024-12-09 11:15:18.964009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:17.903 [2024-12-09 11:15:18.964023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:17.903 qpair failed and we were unable to recover it. 
01:04:18.245 [2024-12-09 11:15:18.989861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.245 [2024-12-09 11:15:18.989904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.245 qpair failed and we were unable to recover it. 01:04:18.245 [2024-12-09 11:15:18.990132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.245 [2024-12-09 11:15:18.990175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.245 qpair failed and we were unable to recover it. 01:04:18.245 [2024-12-09 11:15:18.990346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.245 [2024-12-09 11:15:18.990389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.990601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.990656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.990838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.990887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.991045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.991059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.991169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.991184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.991294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.991309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.991530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.991828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.991873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.992144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.992188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.992351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.992395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.992606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.992663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.992889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.992933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.993099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.993142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.993426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.993469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.993742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.993788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.994000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.994043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.994273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.994316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.994478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.994522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.994732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.994778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.995010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.995052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.995274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.995318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.995486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.995530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.995767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.995812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.996110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.996125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.996268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.996282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.996505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.996519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.996740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.996785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.997002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.997045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.997262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.997305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.997612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.997669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.997867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.997881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.998081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.998197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.998434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.998656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.998839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.998943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.998983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.999221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.999264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 
01:04:18.246 [2024-12-09 11:15:18.999478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.246 [2024-12-09 11:15:18.999522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.246 qpair failed and we were unable to recover it. 01:04:18.246 [2024-12-09 11:15:18.999672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:18.999717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.000008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.000051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.000223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.000238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.000387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.000404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.000629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.000688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.000923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.000967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.001209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.001252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.001559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.001602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.001896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.001984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.002219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.002266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.002492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.002536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.002749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.002795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.003078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.003122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.003350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.003395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.003544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.003589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.003821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.003836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.004043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.004058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.004232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.004247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.004366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.004409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.004684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.004730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.004976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.004991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.005074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.005088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.005230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.005244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.005467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.005482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.005688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.005703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.005941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.005985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.006159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.006203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.006356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.006399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.006560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.006603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.006905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.006951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.007252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.007297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.007527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.007572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.007871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.007916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.008220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.008264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.247 [2024-12-09 11:15:19.008468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.008512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.008815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.008861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.008986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.009001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.009162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.009177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 01:04:18.247 [2024-12-09 11:15:19.009335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.247 [2024-12-09 11:15:19.009350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.247 qpair failed and we were unable to recover it. 
01:04:18.250 [2024-12-09 11:15:19.030892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.250 [2024-12-09 11:15:19.030948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.250 qpair failed and we were unable to recover it.
01:04:18.250 [2024-12-09 11:15:19.031150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.031167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.031335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.031351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.031491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.031538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.031761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.031821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.032043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 
01:04:18.250 [2024-12-09 11:15:19.032239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.032456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.032573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.032727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.032851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 
01:04:18.250 [2024-12-09 11:15:19.032971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.032986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.033134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.250 [2024-12-09 11:15:19.033151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.250 qpair failed and we were unable to recover it. 01:04:18.250 [2024-12-09 11:15:19.033227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.033242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.033380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.033395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.033553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.033568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.033727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.033772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.033926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.033970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.034669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.034905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.034993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.035088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.035268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.035525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.035789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.035946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.035960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.036132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.036189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.036364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.036408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.036565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.036609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.036786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.036839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.037117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.037162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.037396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.037411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.037588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.037602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.037850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.037895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.038058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.038102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.038338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.038382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.038544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.038587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.038900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.038945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.039175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.039219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.039428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.039472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.039660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.039706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.039972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.040017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.040268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.040311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.040537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.040580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.040818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.040863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.041099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.041142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.041371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.041414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 
01:04:18.251 [2024-12-09 11:15:19.041641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.251 [2024-12-09 11:15:19.041709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.251 qpair failed and we were unable to recover it. 01:04:18.251 [2024-12-09 11:15:19.041938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.041987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.042139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.042154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.042327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.042371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.042623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.042680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.042901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.042945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.043119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.043133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.043291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.043335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.043609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.043662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.043947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.043991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.044280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.044324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.044597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.044642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.044899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.044943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.045222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.045236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.045466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.045484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.045698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.045713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.045942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.045957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.046176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.046220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.046493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.046536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.046758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.046803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.047092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.047136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.047431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.047474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.047714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.047760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.048047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.048100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.048285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.048299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.048411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.048437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.048687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.048734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.048898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.048942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.049192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.049236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.049466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.049510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.049741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.049787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.050091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.050106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.050270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.050284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.050517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.050560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 01:04:18.252 [2024-12-09 11:15:19.050729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.252 [2024-12-09 11:15:19.050774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.252 qpair failed and we were unable to recover it. 
01:04:18.252 [2024-12-09 11:15:19.051082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.252 [2024-12-09 11:15:19.051125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.252 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with advancing timestamps from 11:15:19.051 through 11:15:19.081 ...]
01:04:18.255 [2024-12-09 11:15:19.081640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.255 [2024-12-09 11:15:19.081698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.255 qpair failed and we were unable to recover it. 01:04:18.255 [2024-12-09 11:15:19.081982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.255 [2024-12-09 11:15:19.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.255 qpair failed and we were unable to recover it. 01:04:18.255 [2024-12-09 11:15:19.082259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.255 [2024-12-09 11:15:19.082277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.255 qpair failed and we were unable to recover it. 01:04:18.255 [2024-12-09 11:15:19.082440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.255 [2024-12-09 11:15:19.082457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.255 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.082707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.082758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.083024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.083039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.083239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.083283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.083568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.083612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.083950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.083999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.084266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.084315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.084559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.084606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.084932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.084990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.085168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.085215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.085437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.085452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.085672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.085688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.085778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.085793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.086014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.086208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.086399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.086506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.086703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.086842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.086863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.087010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.087028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.087128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.087143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.087321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.087369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.087547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.087592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.087860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.087908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.088202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.088217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.088385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.088400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.088556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.088602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.088815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.088863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.089147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.089162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.089332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.089376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.089614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.089678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.089919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.089969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.090205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.090252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.090379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.090393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.090626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.090641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.090801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.090820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.090986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.091001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 
01:04:18.256 [2024-12-09 11:15:19.091253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.091268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.091434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.091450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.091711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.256 [2024-12-09 11:15:19.091775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.256 qpair failed and we were unable to recover it. 01:04:18.256 [2024-12-09 11:15:19.091974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.092031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.092273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.092331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.092513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.092560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.092793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.092841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.093170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.093218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.093497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.093513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.093747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.093762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.093923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.093939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.094088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.094107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.094307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.094322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.094471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.094487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.094672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.094720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.094971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.095209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.095317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.095431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.095620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.095722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.095923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.095967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.096145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.096196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.096354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.096401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.096696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.096750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.097059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.097115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.097356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.097403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.097714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.097762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.098014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.098069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.098339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.098386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.098704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.098757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.098924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.098968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.099271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.099315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.099549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.099593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.099866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.099912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.100195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.100239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.100545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.100589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.100903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.100948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.101086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.101100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.101274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.101289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 
01:04:18.257 [2024-12-09 11:15:19.101452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.101466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.101710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.257 [2024-12-09 11:15:19.101756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.257 qpair failed and we were unable to recover it. 01:04:18.257 [2024-12-09 11:15:19.101994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.102038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.102328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.102385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.102542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.102557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.102800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.102845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.103075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.103119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.103359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.103402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.103711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.103756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.103977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.104021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.104349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.104392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.104639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.104701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.105009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.105053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.105319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.105504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.105548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.105847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.105893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.106083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.106098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.106275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.106319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.106569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.106613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.106848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.106892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.107038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.107053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.107288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.107331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.107570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.107613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.107969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.108014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.108248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.108292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.108597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.108641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.108869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.108912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.109203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.109246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.109465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.109479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.109663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.109708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.109990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.110034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.110330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.110374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.110671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.110716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.110955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.110998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.111333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.111377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 
01:04:18.258 [2024-12-09 11:15:19.111696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.111742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.111961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.112005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.258 qpair failed and we were unable to recover it. 01:04:18.258 [2024-12-09 11:15:19.112250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.258 [2024-12-09 11:15:19.112293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.112441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.112455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.112687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.112702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.112886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.112901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.113080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.113095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.113205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.113248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.113502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.113545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.113780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.113825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.114101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.114144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.114347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.114361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.114548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.114591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.114905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.114951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.115175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.115218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.115524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.115567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.115768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.115820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.116045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.116088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.116363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.116378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.116611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.116625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.116859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.116875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.117020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.117035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.117209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.117252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.117468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.117511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.117795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.117841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.118090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.118134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.118360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.118402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.118708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.118753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.118985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.119029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.119276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.119319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.119513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.119528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.119751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.119796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.120077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.120121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.120344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.120388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.120689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.120733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.121032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.121075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.121317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.121360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.121536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.121578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.121835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.121881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.122188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.122232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 
01:04:18.259 [2024-12-09 11:15:19.122458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.122501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.259 [2024-12-09 11:15:19.122804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.259 [2024-12-09 11:15:19.122849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.259 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.123075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.123120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.123310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.123353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.123622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.123637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 
01:04:18.260 [2024-12-09 11:15:19.123835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.123849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.124032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.124046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.124258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.124273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.124509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.124551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.124735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.124779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 
01:04:18.260 [2024-12-09 11:15:19.124930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.124973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.125267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.125310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.125609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.125660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.125893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.125937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.126217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.126268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 
01:04:18.260 [2024-12-09 11:15:19.126433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.126447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.126669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.126720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.126955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.126999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.127306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.127350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 01:04:18.260 [2024-12-09 11:15:19.127677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.260 [2024-12-09 11:15:19.127722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.260 qpair failed and we were unable to recover it. 
01:04:18.261 [2024-12-09 11:15:19.143400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.261 [2024-12-09 11:15:19.143445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.261 qpair failed and we were unable to recover it.
01:04:18.261 [2024-12-09 11:15:19.143612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.261 [2024-12-09 11:15:19.143629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.261 qpair failed and we were unable to recover it.
01:04:18.261 [2024-12-09 11:15:19.143769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.261 [2024-12-09 11:15:19.143824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:18.261 qpair failed and we were unable to recover it.
01:04:18.261 [2024-12-09 11:15:19.143991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.261 [2024-12-09 11:15:19.144033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:18.261 qpair failed and we were unable to recover it.
01:04:18.261 [2024-12-09 11:15:19.144196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.261 [2024-12-09 11:15:19.144214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:18.261 qpair failed and we were unable to recover it.
01:04:18.263 [2024-12-09 11:15:19.154510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.154555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.154853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.154900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.155188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.155232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.155417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.155475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.155671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.155721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.155893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.155939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.156171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.156216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.156448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.156471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.156663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.156960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.157005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.157315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.157337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.157538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.157560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.157790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.157814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.157996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.158018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.158203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.158226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.158475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.158520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.158732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.158777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.159021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.159080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.159348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.159400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.159617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.159675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.159864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.159909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.160192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.160237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.160443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.160465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.160656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.160678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.160845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.160890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.161074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.161119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.161335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.161388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.161487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.161510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.161740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.161764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.161923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.161946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.263 [2024-12-09 11:15:19.162181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.162207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.162435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.162457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.162627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.162653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.162845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.162868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 01:04:18.263 [2024-12-09 11:15:19.163100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.263 [2024-12-09 11:15:19.163145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.263 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.163461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.163507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.163770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.163793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.164018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.164040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.164287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.164310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.164431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.164452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.164680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.164726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.164961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.165007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.165176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.165222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.165545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.165589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.165928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.165978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.166216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.166261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.166518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.166563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.166855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.166902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.167229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.167274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.167555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.167571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.167802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.167819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.168029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.168045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.168204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.168219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.168383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.168427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.168603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.168656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.168885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.168930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.169149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.169194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.169510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.169799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.169845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.170018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.170063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.170353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.170368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.170514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.170531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.170702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.170748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.171047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.171092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.171386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.171402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.171577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.171594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.171824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.171841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.172088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.172104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.172274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.172290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.172528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.172573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 
01:04:18.264 [2024-12-09 11:15:19.172839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.172885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.173147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.173192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.173452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.173467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.173624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.264 [2024-12-09 11:15:19.173640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.264 qpair failed and we were unable to recover it. 01:04:18.264 [2024-12-09 11:15:19.173826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.173843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.174016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.174060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.174235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.174280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.174601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.174663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.174981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.175026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.175173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.175219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.175385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.175431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.175668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.175715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.175966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.176010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.176216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.176238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.176434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.176456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.176578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.176599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.176806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.176825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.176991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.177006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.177162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.177206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.177351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.177395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.177570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.177614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.177922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.177968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.178163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.178207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.178438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.178453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.178615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.178632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.178790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.178807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.178990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.179035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.179258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.179557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.179602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.179886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.179933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.180232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.180276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.180573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.180612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.180855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.180871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.181099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.181115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.181344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.181359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.181569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.181585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.181812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.181828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.182010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.182026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.182206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.182251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.182439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.182485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 
01:04:18.265 [2024-12-09 11:15:19.182793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.182842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.183136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.183306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.183322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.265 qpair failed and we were unable to recover it. 01:04:18.265 [2024-12-09 11:15:19.183463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.265 [2024-12-09 11:15:19.183522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.183752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.183799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.184091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.184136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.184396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.184441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.184733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.184750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.185006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.185050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.185363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.185419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.185651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.185667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.185931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.185948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.186133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.186178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.186346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.186391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.186634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.186690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.186921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.186966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.187184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.187201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.187436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.187480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.187719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.187766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.187983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.188028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.188256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.188302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.188537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.188585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.188762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.188779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.188927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.188943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.189087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.189103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.189199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.189216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.189435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.189479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.189656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.189709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.189995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.190039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.190269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.190314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.190461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.190477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.190634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.190656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.190845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.190861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.191007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.191023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.191256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.191300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 
01:04:18.266 [2024-12-09 11:15:19.191582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.191627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.191974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.192019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.192253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.192298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.266 [2024-12-09 11:15:19.192517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.266 [2024-12-09 11:15:19.192562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.266 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.192882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.192928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.193143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.193188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.193406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.193422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.193633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.193657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.193896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.193912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.194127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.194144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.194326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.194370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.194587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.194631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.194953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.194999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.195321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.195365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.195693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.195740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.195979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.196025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.196309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.196355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.196658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.196704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.197026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.197071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.197242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.197288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.197536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.197582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.197752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.197769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.197937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.197953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.198171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.198216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.198467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.198511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.198790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.198806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.198908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.198925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.199074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.199090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.199444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.199460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.199721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.199738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.199933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.199978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.200218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.200263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.200543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.200596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.200906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.200953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.201210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.201255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.267 [2024-12-09 11:15:19.201425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.201469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.201690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.201707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.201864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.201881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.202095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.202111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 01:04:18.267 [2024-12-09 11:15:19.202261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.267 [2024-12-09 11:15:19.202276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.267 qpair failed and we were unable to recover it. 
01:04:18.269 [2024-12-09 11:15:19.217562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.269 [2024-12-09 11:15:19.217603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:18.269 qpair failed and we were unable to recover it.
01:04:18.270 [2024-12-09 11:15:19.231185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.231230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.231506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.231522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.231700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.231747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.231982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.232027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.232178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.232222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 
01:04:18.270 [2024-12-09 11:15:19.232455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.232471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.232642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.232703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.232930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.232975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.233143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.233187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 01:04:18.270 [2024-12-09 11:15:19.233404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.270 [2024-12-09 11:15:19.233449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.270 qpair failed and we were unable to recover it. 
01:04:18.270 [2024-12-09 11:15:19.233729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.233767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.233928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.233945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.234163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.234209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.234486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.234648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.234665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.234834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.234877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.235184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.235228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.235538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.235554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.235780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.235797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.236036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.236052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.236150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.236166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.236338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.236382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.236556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.236601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.236900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.236946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.237108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.237152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.237437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.237481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.237605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.237622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.237872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.237912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.238146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.238191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.238490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.238541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.238756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.238802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.238959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.239004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.239236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.239281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.239565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.239608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.239946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.239992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.240181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.240227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.240429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.240445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.240622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.240677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.240977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.241022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.241287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.241332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.241670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.241716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.241906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.241921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.242017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.242033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.242252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.242268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 01:04:18.271 [2024-12-09 11:15:19.242441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.271 [2024-12-09 11:15:19.242457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.271 qpair failed and we were unable to recover it. 
01:04:18.271 [2024-12-09 11:15:19.242678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.242725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.243042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.243087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.243310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.243355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.243560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.243575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.243745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.243792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.244141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.244387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.244431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.244664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.244711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.244978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.244995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.245144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.245161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.245376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.245392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.245623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.245681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.245861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.245906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.246143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.246187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.246504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.246549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.246822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.246839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.247073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.247118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.247397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.247442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.247741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.247758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.248037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.248053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.248140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.248156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.248331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.248347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.248490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.248506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.248743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.248789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.249021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.249078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.249366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.249411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.249733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.249779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.250098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.250142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.250359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.250404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 01:04:18.272 [2024-12-09 11:15:19.250623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.250683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [2024-12-09 11:15:19.250900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.272 [2024-12-09 11:15:19.250916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.272 qpair failed and we were unable to recover it. 
01:04:18.272 [... identical connect() failed / qpair failed message pair repeated for every retry through 2024-12-09 11:15:19.280018; all attempts to 10.0.0.2 port 4420 on tqpair=0x7f1dcc000b90 returned errno = 111 and no qpair could be recovered ...]
01:04:18.275 [2024-12-09 11:15:19.280165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.280181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.280347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.280392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.280689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.280737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.281040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.281067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.281232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.281249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 
01:04:18.275 [2024-12-09 11:15:19.281414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.281459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.281690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.281737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.275 [2024-12-09 11:15:19.281961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.275 [2024-12-09 11:15:19.282006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.275 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.282216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.282560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.282605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.282927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.282973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.283217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.283262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.283494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.283539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.283862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.283909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.284145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.284190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.284489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.284535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.284791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.284838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.285120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.285164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.285378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.285422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.285703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.285750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.285978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.286022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.286237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.286283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.286592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.286637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.286887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.286933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.287184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.287229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.287455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.287500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.287801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.287818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.287987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.288003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.288234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.288250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.288466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.288511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.288744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.288791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.288960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.289005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.289302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.289347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.289656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.289702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.290003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.290049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.290339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.290383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.290546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.290562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.290732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.290749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.290917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.290934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.291164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.291194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.291344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.291360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.291605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.291621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.291795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.291811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.291991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.292006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.292168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.292213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 
01:04:18.276 [2024-12-09 11:15:19.292439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.292483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.276 [2024-12-09 11:15:19.292712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.276 [2024-12-09 11:15:19.292729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.276 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.292947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.292991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.293220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.293265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.293564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.293609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 
01:04:18.277 [2024-12-09 11:15:19.293834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.293880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.294118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.294164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.294383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.294427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.294613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.294630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.294798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.294815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 
01:04:18.277 [2024-12-09 11:15:19.294933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.294949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.295068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.295086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.295190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.295206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.295316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.295332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.296357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.296388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 
01:04:18.277 [2024-12-09 11:15:19.296585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.296603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.296776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.296793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.296951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.296968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.297158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.297177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 01:04:18.277 [2024-12-09 11:15:19.297284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.277 [2024-12-09 11:15:19.297300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.277 qpair failed and we were unable to recover it. 
01:04:18.277 [2024-12-09 11:15:19.297664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.277 [2024-12-09 11:15:19.297722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:18.277 qpair failed and we were unable to recover it.
01:04:18.278 [2024-12-09 11:15:19.304807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.304853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.305129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.305358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.305540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.305713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.305837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.305940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.305957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.306176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.306221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.306442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.306487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.306778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.306796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.306983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.307027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.307411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.307505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.307740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.307762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.308012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.308136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.308266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.308453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.308639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.308929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.308974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.309264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.309308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.309592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.309638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.309929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.309946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.310103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.310148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.310373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.310418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.310638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.310664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.310826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.310872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.311030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.311076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.311314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.311359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.311521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.311566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 01:04:18.278 [2024-12-09 11:15:19.311867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.278 [2024-12-09 11:15:19.311914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.278 qpair failed and we were unable to recover it. 
01:04:18.278 [2024-12-09 11:15:19.312081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.312097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.312259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.312277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.312455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.312500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.312670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.312718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.312969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.313014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.313319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.313364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.313564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.313609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.313787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.313833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.314149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.314187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.314392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.314409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.314554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.314605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.314839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.314889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.315208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.315227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.315385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.315592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.315639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.315884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.315932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.316172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.316218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.316455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.316500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.316800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.316848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.317105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.317150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.317387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.317431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.317739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.317847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.318013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.318056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.318328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.318348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.318569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.318615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.318853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.318899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.319072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.319116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.319345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.319391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.319654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.319688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.319834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.319850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.319946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.319962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.320187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.320203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.320364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.320381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.320552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.320599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.320874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.320930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 
01:04:18.279 [2024-12-09 11:15:19.321201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.321253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.321526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.321577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.321735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.321751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.321886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.321930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.279 qpair failed and we were unable to recover it. 01:04:18.279 [2024-12-09 11:15:19.322164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.279 [2024-12-09 11:15:19.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 
01:04:18.280 [2024-12-09 11:15:19.322505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.322550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 01:04:18.280 [2024-12-09 11:15:19.322827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.322874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 01:04:18.280 [2024-12-09 11:15:19.323107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.323153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 01:04:18.280 [2024-12-09 11:15:19.323389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.323435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 01:04:18.280 [2024-12-09 11:15:19.323663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.323708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 
01:04:18.280 [2024-12-09 11:15:19.323980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.280 [2024-12-09 11:15:19.323997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.280 qpair failed and we were unable to recover it. 
[... the same message triple (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f1dd4000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it") repeats continuously from 11:15:19.324178 through 11:15:19.353150; duplicates omitted ...]
01:04:18.283 [2024-12-09 11:15:19.353405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.353451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.353614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.353674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.353837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.353882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.354092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.354109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.354278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.354324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.354559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.354603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.354851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.354898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.355186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.355231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.355455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.355501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.355665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.355683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.355876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.355921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.356171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.356216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.356492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.356536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.356813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.356861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.357103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.357119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.357283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.357299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.357451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.357467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.357582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.357598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.357763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.357780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.358002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.358047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.358290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.358335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.358549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.358600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.358891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.358937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.359195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.359212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.359300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.359316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.359543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.359560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.359687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.359704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.359789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.359844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.360075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.360120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.360423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.360468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 
01:04:18.283 [2024-12-09 11:15:19.360686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.360732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.283 [2024-12-09 11:15:19.361032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.283 [2024-12-09 11:15:19.361048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.283 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.361208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.361224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.361443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.361488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.361658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.361705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.361962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.361978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.362127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.362144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.362421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.362465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.362691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.362737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.362970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.362986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.363158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.363173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.363404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.363420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.363529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.363546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.363759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.363776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.363894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.363910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.364151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.364196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.364408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.364452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.364624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.364678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.364934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.364950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.365115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.365131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.365365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.365381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.365544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.365560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.365721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.365767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.366000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.366045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.366363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.366409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.366624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.366679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.366898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.366913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.367146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.367162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.367389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.367405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.367552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.367567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.367728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.367746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.367982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.368034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.368358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.368403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.368556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.368600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.368799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.368846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 
01:04:18.284 [2024-12-09 11:15:19.369064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.369080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.369247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.284 [2024-12-09 11:15:19.369263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.284 qpair failed and we were unable to recover it. 01:04:18.284 [2024-12-09 11:15:19.369426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.369442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.369601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.369617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.369844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.369891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 
01:04:18.285 [2024-12-09 11:15:19.370104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.370149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.370429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.370473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.370770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.370817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.370984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.371029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 01:04:18.285 [2024-12-09 11:15:19.371201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.285 [2024-12-09 11:15:19.371217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.285 qpair failed and we were unable to recover it. 
01:04:18.285 [2024-12-09 11:15:19.371435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.371451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.371697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.371745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.372026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.372071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.372315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.372331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.372497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.372513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.372753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.372800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.372934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.372950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.373042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.373058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.373208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.373225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.373442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.373488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.373670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.373717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.374015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.374031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.374204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.374249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.374486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.374531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.374751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.374768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.374927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.374972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.375205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.375250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.375501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.375546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.375844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.375892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.376186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.376230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.376528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.376573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.376889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.376938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.377259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.377305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.377615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.377674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.377921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.377966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.378264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.378308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.378569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.378619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.378763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.378780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.378922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.378939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.379042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.379091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.379318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.285 [2024-12-09 11:15:19.379362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.285 qpair failed and we were unable to recover it.
01:04:18.285 [2024-12-09 11:15:19.379605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.379664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.379880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.379925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.380154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.380198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.380494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.380544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.380825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.380842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.381016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.381033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.381254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.381300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.381552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.381601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.381896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.381945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.382117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.382133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.382259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.382317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.382585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.382639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.382865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.382883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.383133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.383194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.383381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.383433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.383678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.383744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.384013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.384067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.384266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.384314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.384547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.384595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.384836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.384854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.385039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.385086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.385391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.385439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.385705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.385755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.386016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.386073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.386346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.386406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.386669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.386730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.387038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.387084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.387324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.387371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.387612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.387684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.387845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.387862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.388011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.388079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.388273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.388320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.388495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.388543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.388847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.388866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.389034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.389054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.389268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.389289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.389448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.389466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.389707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.389754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.389984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.390030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.286 qpair failed and we were unable to recover it.
01:04:18.286 [2024-12-09 11:15:19.390196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.286 [2024-12-09 11:15:19.390248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.287 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.390497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.390547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.390796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.390815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.390970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.390989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.391235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.391252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.391509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.391526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.391654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.391690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.391860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.391878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.391997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.392176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.392427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.392704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.392928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.392974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.393959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.393976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.394162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.394206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.394440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.394485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.394713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.394761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.394955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.395001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.395166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.395182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.395424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.395468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.395749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.395795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.395947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.395966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.396135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.396187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.396367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.396425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.396668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.396720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.396855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.396871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.396976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.396994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.397162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.397178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.397402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.397448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.605 qpair failed and we were unable to recover it.
01:04:18.605 [2024-12-09 11:15:19.397683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.605 [2024-12-09 11:15:19.397735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.397951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.397975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.398096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.398112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.398231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.398248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.398358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.398374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.398528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.398548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.398712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.398762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.399011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.399059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.399228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.399282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.399581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.606 [2024-12-09 11:15:19.399636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.606 qpair failed and we were unable to recover it.
01:04:18.606 [2024-12-09 11:15:19.399880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.399897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 
01:04:18.606 [2024-12-09 11:15:19.400638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.400961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.400983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.401199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.401223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.401321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.401338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 
01:04:18.606 [2024-12-09 11:15:19.401587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.401605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.401775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.401794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.402014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.402032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.402255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.402272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.402440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.402458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 
01:04:18.606 [2024-12-09 11:15:19.402602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.402624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.402799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.402817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.402985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.403110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.403230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 
01:04:18.606 [2024-12-09 11:15:19.403426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.403538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.403769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.403786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.404015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.404031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.404183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.404200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 
01:04:18.606 [2024-12-09 11:15:19.404431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.404447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.404680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.404697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.404834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.404851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.606 [2024-12-09 11:15:19.405005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.606 [2024-12-09 11:15:19.405021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.606 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.405125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.405142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.405322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.405338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.405428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.405448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.405680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.405697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.405912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.405929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.406009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.406171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.406357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.406541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.406704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.406827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.406956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.406972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.407138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.407154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.407403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.407419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.407567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.407583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.407760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.407777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.407942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.407959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.408113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.408211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.408380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.408607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.408740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.408928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.408944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.409064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.409241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.409407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.409573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.409744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.409855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.409873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.410025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.410041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.410199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.410215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.410427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.410444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.410692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.410708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.410954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.410970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.411195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.411212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.607 [2024-12-09 11:15:19.411374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.411391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 
01:04:18.607 [2024-12-09 11:15:19.411570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.607 [2024-12-09 11:15:19.411587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.607 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.411772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.411789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.411951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.411968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.412200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.412378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 
01:04:18.608 [2024-12-09 11:15:19.412555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.412730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.412851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.412981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.412998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.413085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 
01:04:18.608 [2024-12-09 11:15:19.413218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.413324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.413590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.413768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 01:04:18.608 [2024-12-09 11:15:19.413961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.608 [2024-12-09 11:15:19.413977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.608 qpair failed and we were unable to recover it. 
01:04:18.608 [... 2024-12-09 11:15:19.414147 through 11:15:19.437546: the same three-line sequence repeats ~110 more times — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 / 0x7f1dc8000b90 / 0x5f84d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
01:04:18.625 [2024-12-09 11:15:19.437689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.625 [2024-12-09 11:15:19.437707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.625 qpair failed and we were unable to recover it. 01:04:18.625 [2024-12-09 11:15:19.437816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.437832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.437933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.437950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.438161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.438338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.438502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.438652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.438763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.438932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.438985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.439301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.439347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.439627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.439688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.439855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.439899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.440123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.440167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.440403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.440448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.440673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.440720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.440986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.441032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.441193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.441239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.441536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.441580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.441764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.441797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.441966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.441985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.442160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.442213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.442394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.442446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.442692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.442740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.442904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.442922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.443066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.443083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.443185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.443202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.443441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.443458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.443697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.443714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.443928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.443948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.444094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.444272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.444452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.444573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.444670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.444815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.444979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.444995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.445223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.445240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.445402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.445419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.445677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.445694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 01:04:18.626 [2024-12-09 11:15:19.445881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.445898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.626 qpair failed and we were unable to recover it. 
01:04:18.626 [2024-12-09 11:15:19.446128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.626 [2024-12-09 11:15:19.446144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.446366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.446386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.446548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.446564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.446761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.446778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.446938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.446955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.447070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.447684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.447930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.447947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.448108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.448229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.448412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.448530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.448658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.448860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.448905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.449200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.449245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.449508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.449553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.449809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.449855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.450150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.450194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.450435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.450479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.450671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.450717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.450997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.451042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.451355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.451400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.451669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.451715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.451926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.451944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.452064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.452109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.452406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.452450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.452674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.452722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.452948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.452993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.453223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.453269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 01:04:18.627 [2024-12-09 11:15:19.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.627 [2024-12-09 11:15:19.453613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.627 qpair failed and we were unable to recover it. 
01:04:18.627 [2024-12-09 11:15:19.453924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.627 [2024-12-09 11:15:19.453971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.627 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously from 11:15:19.454 through 11:15:19.479, for tqpairs 0x7f1dd4000b90, 0x7f1dcc000b90, and 0x5f84d0, all targeting addr=10.0.0.2, port=4420 ...]
01:04:18.631 [2024-12-09 11:15:19.479427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.479486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.479812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.479864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.480122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.480167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.480400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.480445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.480618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.480675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.480977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.481021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.481231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.481276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.481434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.481451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.481563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.481579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.481746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.481763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.482018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.482062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.482304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.482348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.482588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.482637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.482897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.482943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.483118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.483161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.483274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.483295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.483475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.483496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.483719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.483803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.484036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.484084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.484330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.484609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.484829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.484879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.485106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.485347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.485474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.485640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.485792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.485914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.485930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.486091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.486107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.486287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.486333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.486578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.486623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.486871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.486917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.487205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.487250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.487463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.487508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.487761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.487808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 
01:04:18.631 [2024-12-09 11:15:19.488067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.488083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.488171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.488187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.488420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.631 [2024-12-09 11:15:19.488465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.631 qpair failed and we were unable to recover it. 01:04:18.631 [2024-12-09 11:15:19.488711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.488758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.489047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.489093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.489339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.489385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.489686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.489732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.489969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.490014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.490180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.490225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.490525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.490569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.490786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.490832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.491084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.491100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.491264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.491309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.491590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.491635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.491947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.491991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.492212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.492256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.492592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.492637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.492905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.492957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.493255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.493300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.493536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.493582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.493773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.493819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.494071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.494115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.494341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.494386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.494680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.494726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.495059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.495104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.495321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.495338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.495464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.495509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.495772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.495818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.496031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.496077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.496393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.496436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.496661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.496706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.496945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.496990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.497217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.497233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.497424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.497468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.497639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.497692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.497942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.497987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.498186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.498202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.498383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.498428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.498734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.498780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.499012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.499057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 
01:04:18.632 [2024-12-09 11:15:19.499229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.499273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.632 [2024-12-09 11:15:19.499551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.632 [2024-12-09 11:15:19.499594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.632 qpair failed and we were unable to recover it. 01:04:18.633 [2024-12-09 11:15:19.499798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.633 [2024-12-09 11:15:19.499844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.633 qpair failed and we were unable to recover it. 01:04:18.633 [2024-12-09 11:15:19.500009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.633 [2024-12-09 11:15:19.500066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.633 qpair failed and we were unable to recover it. 01:04:18.633 [2024-12-09 11:15:19.500318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.633 [2024-12-09 11:15:19.500334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.633 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.526051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.526096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.526248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.526264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.526498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.526542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.526823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.526870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.527035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.527229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.527522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.527740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.527954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.527970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.528136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.528151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.528373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.528418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.528716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.528799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.529037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.529082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.529409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.529454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.529765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.530045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.530090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.530260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.530305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.530477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.530492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.530729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.530775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.530990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.531035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.531219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.531263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.531501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.531552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.531732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.531778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.532013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.532058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.532363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.532408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.532643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.532703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.532866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.532910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.533101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.533145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.533456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.533500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.533724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.533771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.533957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.534003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.534221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.534267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.534399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.534415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.636 [2024-12-09 11:15:19.534665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.534682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 
01:04:18.636 [2024-12-09 11:15:19.534862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.636 [2024-12-09 11:15:19.534907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.636 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.535104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.535149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.535313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.535358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.535555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.535570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.535736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.535783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.535996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.536042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.536269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.536286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.536442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.536624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.536679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.536902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.536947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.537129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.537186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.537370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.537386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.537560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.537605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.537796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.537842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.538087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.538102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.538349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.538364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.538525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.538540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.538767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.538783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.538883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.538898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.539009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.539025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.539278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.539322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.539485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.539530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.539778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.539825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.540040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.540084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.540349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.540394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.540682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.540698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.540876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.540892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.541017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.541068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.541309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.541354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.541665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.541711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.541858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.541902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.542120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.542171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.542413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.542428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.542674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.542690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 01:04:18.637 [2024-12-09 11:15:19.542835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.637 [2024-12-09 11:15:19.542850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.637 qpair failed and we were unable to recover it. 
01:04:18.637 [2024-12-09 11:15:19.542984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.543029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.543258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.543302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.543618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.543679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.543865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.543910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.544083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.544127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.544275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.544291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.544449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.544465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.544623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.544680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.544909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.544955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.545167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.545211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.545367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.545413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.545656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.545703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.545983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.546029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.546258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.546303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.546524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.546568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.546804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.546850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.547013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.547057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.547285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.547331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.547508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.547523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.547699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.547745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.547936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.547982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.548205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.548259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.548437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.548453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.548548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.548600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.548856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.548902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.549088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.549149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.549384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.549399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.549554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.549570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.549674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.549721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.549954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.550000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.550290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.550335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.550493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.550538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.550757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.550810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.551039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.551249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 
01:04:18.638 [2024-12-09 11:15:19.551471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.551802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.551983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.638 [2024-12-09 11:15:19.551998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.638 qpair failed and we were unable to recover it. 01:04:18.638 [2024-12-09 11:15:19.552118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.552134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.552242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.552258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.552476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.552520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.552739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.552786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.552962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.553010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.553175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.553191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.553363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.553408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.553718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.553764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.554044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.554089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.554231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.554246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.554418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.554461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.554747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.554793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.554987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.555341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.555526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.555652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.555784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.555906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.555922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.556029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.556045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.556140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.556155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.556428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.556466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.556637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.556666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.556843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.556889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.557060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.557105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.557336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.557380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.557596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.557641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.557835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.557880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.558082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.558098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.558265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.558310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.558479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.558523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.558766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.558814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.559043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.559088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.559261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.559305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.559563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.559582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.559754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.559772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.559931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.559975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 
01:04:18.639 [2024-12-09 11:15:19.560146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.560191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.560364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.639 [2024-12-09 11:15:19.560412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.639 qpair failed and we were unable to recover it. 01:04:18.639 [2024-12-09 11:15:19.560624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.560640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.560792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.560809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.560981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.560998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.640 [2024-12-09 11:15:19.561105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.561120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.561251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.561295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.561580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.561624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.561799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.561845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.562127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.562181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.640 [2024-12-09 11:15:19.562409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.562425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.562585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.562602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.562852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.562870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.562998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.563043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.563206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.563249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.640 [2024-12-09 11:15:19.563497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.563553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.563775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.563791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.564018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.564034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.564203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.564248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.564542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.564586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.640 [2024-12-09 11:15:19.564861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.564907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.565125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.565171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.565451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.565494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.565663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.565711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.565944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.565990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.640 [2024-12-09 11:15:19.566220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.566265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.566496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.566540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.566763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.566811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.567010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.567054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 01:04:18.640 [2024-12-09 11:15:19.567243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.640 [2024-12-09 11:15:19.567288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.640 qpair failed and we were unable to recover it. 
01:04:18.643 [2024-12-09 11:15:19.594448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.594465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.594629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.594652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.594808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.594824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.594990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.595035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.595421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 
01:04:18.643 [2024-12-09 11:15:19.595696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.595722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.595888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.643 [2024-12-09 11:15:19.595910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.643 qpair failed and we were unable to recover it. 01:04:18.643 [2024-12-09 11:15:19.596070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.596116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.596395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.596439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.596599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.596621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.596832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.596881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.597171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.597216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.597386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.597402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.597546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.597562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.597720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.597736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.597926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.597972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.598156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.598200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.598476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.598522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.598807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.598855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.599045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.599089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.599323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.599368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.599620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.599637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.599847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.600159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.600205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.600445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.600490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.600659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.600706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.600930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.600974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.601139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.601182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.601493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.601538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.601769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.601818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.602056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.602101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.602760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.602800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.603021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.603038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.603197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.603214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.603387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.603403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.603650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.603667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.603882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.603899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.604052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.604184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.604343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.604481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.604665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.604876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.604921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.605106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.605152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.644 [2024-12-09 11:15:19.605450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.605470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 
01:04:18.644 [2024-12-09 11:15:19.605718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.644 [2024-12-09 11:15:19.605735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.644 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.605905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.605921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.606019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.606035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.606131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.606149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.606186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6063f0 (9): Bad file descriptor 01:04:18.645 [2024-12-09 11:15:19.606447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.606489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.606611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.606661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.606868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.606888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.606990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.607194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.607318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.607439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.607559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.607749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.607908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.607925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.608121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.608166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.608495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.608540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.608789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.608905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.608950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.609147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.609399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.609451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.609613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.609629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.609758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.609774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.609897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.609913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.610128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.610172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.610407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.610451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.610617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.610673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.610916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.610961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.611134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.611179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.611340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.611357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 01:04:18.645 [2024-12-09 11:15:19.611471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.645 [2024-12-09 11:15:19.611516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.645 qpair failed and we were unable to recover it. 
01:04:18.645 [2024-12-09 11:15:19.611768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.645 [2024-12-09 11:15:19.611816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.645 qpair failed and we were unable to recover it.
01:04:18.649 [... the three messages above repeat verbatim for every subsequent connection attempt, timestamps advancing from 11:15:19.612052 through 11:15:19.641495; each attempt fails with errno = 111 against the same tqpair=0x7f1dd4000b90, addr=10.0.0.2, port=4420 ...]
01:04:18.649 [2024-12-09 11:15:19.641721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.641767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.642056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.642101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.642346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.642390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.642543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.642587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.642909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.642926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.643138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.643154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.643384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.643429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.643714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.643761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.643932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.643977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.644203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.644248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.644527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.644572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.644684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.644700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.644862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.644879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.645040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.645056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.645277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.645322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.645506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.645552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.645712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.645758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.645910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.645954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.646248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.646292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.646518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.646562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.646873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.646920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.647198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.647243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.647507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.647524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.647614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.647630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.647756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.647773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.647949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.647965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.648192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.648208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.648390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.648441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.648689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.648736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.648909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.648954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.649124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.649169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.649455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.649499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.649702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.649719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.649850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.649896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.650127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.650172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 
01:04:18.649 [2024-12-09 11:15:19.650403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.650448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.649 [2024-12-09 11:15:19.650675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.649 [2024-12-09 11:15:19.650723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.649 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.651029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.651073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.651306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.651351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.651627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.651659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.651849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.651895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.652129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.652174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.652481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.652498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.652693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.652711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.652858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.652874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.653020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.653134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.653253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.653359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.653562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.653861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.653907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.654203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.654247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.654550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.654594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.654788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.654804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.654898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.654916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.655141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.655158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.655321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.655366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.655590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.655633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.655855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.655871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.656049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.656094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.656313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.656358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.656583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.656620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.656747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.656764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.656919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.656935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.657035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.657051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.657177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.657222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.657455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.657502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.657671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.657715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.657885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.657901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.658003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.658019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 
01:04:18.650 [2024-12-09 11:15:19.658114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.658130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.658304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.658348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.658572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.650 [2024-12-09 11:15:19.658617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.650 qpair failed and we were unable to recover it. 01:04:18.650 [2024-12-09 11:15:19.658831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.651 [2024-12-09 11:15:19.658877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.651 qpair failed and we were unable to recover it. 01:04:18.651 [2024-12-09 11:15:19.659050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.651 [2024-12-09 11:15:19.659095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.651 qpair failed and we were unable to recover it. 
01:04:18.651 [2024-12-09 11:15:19.659290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.651 [2024-12-09 11:15:19.659343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.651 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / qpair recovery failures for tqpair=0x7f1dd4000b90 (addr=10.0.0.2, port=4420) repeated through 2024-12-09 11:15:19.687565 ...]
01:04:18.654 [2024-12-09 11:15:19.687742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.687758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.687939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.687985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.688221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.688266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.688566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.688610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.688858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.688904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 
01:04:18.654 [2024-12-09 11:15:19.689060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.689104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.689390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.689434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.689660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.689677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.689817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.689863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.690051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.690096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 
01:04:18.654 [2024-12-09 11:15:19.690335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.690386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.690537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.690554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.690781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.690797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.690949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.690965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.691196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.691242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 
01:04:18.654 [2024-12-09 11:15:19.691519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.691563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.691812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.691859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.692115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.692159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.692487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.692503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.692622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.692637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 
01:04:18.654 [2024-12-09 11:15:19.692881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.692926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.693088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.693132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.693463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.693508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.693738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.693754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.693933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.693978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 
01:04:18.654 [2024-12-09 11:15:19.694206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.694251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.654 qpair failed and we were unable to recover it. 01:04:18.654 [2024-12-09 11:15:19.694533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.654 [2024-12-09 11:15:19.694577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.694831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.694878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.695174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.695219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.695459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.695504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.695724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.695741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.695961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.696005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.696228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.696272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.696501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.696545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.696831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.696877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.697123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.697333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.697530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.697738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.697846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.697980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.697996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.698148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.698193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.698493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.698537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.698770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.698799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.698963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.698979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.699076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.699092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.699178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.699194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.699369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.699414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.699682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.699729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.699955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.699972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.700110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.700156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.700394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.700440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.700674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.700720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.700949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.700993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.701144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.701189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.701415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.701459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.701704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.701751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.702041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.702085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.702322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.702367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.702669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.702715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 
01:04:18.655 [2024-12-09 11:15:19.702901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.702945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.703096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.703139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.703354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.655 [2024-12-09 11:15:19.703399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.655 qpair failed and we were unable to recover it. 01:04:18.655 [2024-12-09 11:15:19.703701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.703749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.703998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.704043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 
01:04:18.656 [2024-12-09 11:15:19.704325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.704369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.704613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.704669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.704905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.704921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.705106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.705150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.705474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.705518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 
01:04:18.656 [2024-12-09 11:15:19.705765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.705782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.705954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.705970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.706115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.706131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.706418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.706464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 01:04:18.656 [2024-12-09 11:15:19.706771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.656 [2024-12-09 11:15:19.706817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.656 qpair failed and we were unable to recover it. 
01:04:18.656 [2024-12-09 11:15:19.707046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.656 [2024-12-09 11:15:19.707091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.656 qpair failed and we were unable to recover it.
01:04:18.656 [... the three messages above repeat with advancing timestamps through 2024-12-09 11:15:19.721511, tqpair=0x7f1dd4000b90 ...]
01:04:18.658 [2024-12-09 11:15:19.721722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.658 [2024-12-09 11:15:19.721765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:18.658 qpair failed and we were unable to recover it.
01:04:18.659 [... the three messages above repeat with advancing timestamps through 2024-12-09 11:15:19.730089, tqpair=0x7f1dc8000b90 ...]
01:04:18.659 [2024-12-09 11:15:19.730316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.730333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.730427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.730444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.730532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.730549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.730790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.730807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.730964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.730980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.731066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.731239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.731412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.731538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.731710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.731953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.731970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.732079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.732096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.732179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.732195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.732355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.732372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.733392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.733426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.733670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.733688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.733849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.733866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.734035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.734181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.734368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.734552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.734727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.734905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.734921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.735036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.735150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.735360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.735548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.735790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.735913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.735929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 01:04:18.659 [2024-12-09 11:15:19.736109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.659 [2024-12-09 11:15:19.736126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.659 qpair failed and we were unable to recover it. 
01:04:18.659 [2024-12-09 11:15:19.736279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.736296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.736388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.736404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.736556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.736573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.736725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.736751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.736862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.736882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 
01:04:18.660 [2024-12-09 11:15:19.737054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.737071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.737264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.737281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.737557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.737720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.737736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.737892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.737908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 
01:04:18.660 [2024-12-09 11:15:19.738013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.738176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.738363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.738526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.738690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 
01:04:18.660 [2024-12-09 11:15:19.738869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.738885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.739037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.739142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.739415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.739585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 
01:04:18.660 [2024-12-09 11:15:19.739832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.739947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.739963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.740065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.740258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.740447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 
01:04:18.660 [2024-12-09 11:15:19.740582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.740833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.660 [2024-12-09 11:15:19.740964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.660 [2024-12-09 11:15:19.740981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.660 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.741203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.741221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.741436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.741453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 
01:04:18.984 [2024-12-09 11:15:19.741615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.741632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.741754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.741773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.741950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.741966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.742069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.742234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 
01:04:18.984 [2024-12-09 11:15:19.742374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.742601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.742791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.742891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.742908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.743067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.743084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 
01:04:18.984 [2024-12-09 11:15:19.743190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.743205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.743300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.743316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.744315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.744349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.744600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.744617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 01:04:18.984 [2024-12-09 11:15:19.744851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.984 [2024-12-09 11:15:19.744872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.984 qpair failed and we were unable to recover it. 
01:04:18.985 [2024-12-09 11:15:19.744951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.985 [2024-12-09 11:15:19.744967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:18.985 qpair failed and we were unable to recover it.
01:04:18.987 [2024-12-09 11:15:19.762762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.987 [2024-12-09 11:15:19.762778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.987 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.762867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.762883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.762986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.763449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.763920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.763937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.764153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.764170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.764404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.764420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.764665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.764682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.764794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.764810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.764911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.764927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.765069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.765184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.765293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.765422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.765658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.765776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.765897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.765912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.766414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.766958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.766975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.767143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.767286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.767399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.767557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.767762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 
01:04:18.988 [2024-12-09 11:15:19.767940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.767956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.988 [2024-12-09 11:15:19.768032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.988 [2024-12-09 11:15:19.768048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.988 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.768204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.768380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.768500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.768673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.768833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.768950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.768966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.769118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.769135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.769284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.769301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.769463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.769479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.769661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.769678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.769751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.769767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.769991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.770172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.770356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.770527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.770772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.770950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.770966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.771242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.771852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.771976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.771992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.772084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.772333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.772504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.772605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.772780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.772893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.772909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.773002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.773018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.773181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.773198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 
01:04:18.989 [2024-12-09 11:15:19.773305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.773321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.773473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.773489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.773583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.989 [2024-12-09 11:15:19.773599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.989 qpair failed and we were unable to recover it. 01:04:18.989 [2024-12-09 11:15:19.773835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.773853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.774188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.774824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.774934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.774951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.775049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.775064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.775155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.775171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.775318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.775381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.775613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.775665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.775888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.775931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.776159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.776203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.776432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.776476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.776756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.776803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.777030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.777075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.777303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.777347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.777568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.777612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.777834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.777879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.778176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.778220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.778527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.778543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.778775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.778792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.778973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.778990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.779149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.779166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.779325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.779340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.779437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.779452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.779630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.779686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.779865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.779911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.780163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.780206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.780482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.780526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.780807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.780867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.781103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.781230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.781356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.781532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 
01:04:18.990 [2024-12-09 11:15:19.781763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.781891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.990 [2024-12-09 11:15:19.781907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.990 qpair failed and we were unable to recover it. 01:04:18.990 [2024-12-09 11:15:19.782073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.782100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.782284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.782300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.782512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.782557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.782801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.782847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.783013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.783058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.783271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.783316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.783609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.783661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.783787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.783803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.783927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.783981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.784197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.784241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.784465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.784508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.784791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.784837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.785088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.785139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.785445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.785490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.785643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.785661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.785799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.785815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.785965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.785982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.786125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.786244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.786363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.786586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.786762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.786943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.786960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.787053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.787068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.787145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.787162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.787349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.787391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.787622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.787681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.787849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.787898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.788040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.788056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.788219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.788265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.788546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.788590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.788822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.788869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.789184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.789298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.789314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.789561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.789605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.789888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.789935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.790177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.790194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.790399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.790443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 
01:04:18.991 [2024-12-09 11:15:19.790767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.991 qpair failed and we were unable to recover it. 01:04:18.991 [2024-12-09 11:15:19.790999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.991 [2024-12-09 11:15:19.791015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.791144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.791188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.791349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.791392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.791676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.791709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 
01:04:18.992 [2024-12-09 11:15:19.791853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.791869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.792038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.792082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.792338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.792384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.792684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.792731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.793009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.793053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 
01:04:18.992 [2024-12-09 11:15:19.793281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.793325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.793604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.793658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.793822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.793866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 
01:04:18.992 [2024-12-09 11:15:19.794371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.794929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.794946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 
01:04:18.992 [2024-12-09 11:15:19.795056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.795072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.795169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.795185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.795349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.795365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.795505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.795521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 01:04:18.992 [2024-12-09 11:15:19.795679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.992 [2024-12-09 11:15:19.795695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.992 qpair failed and we were unable to recover it. 
01:04:18.992 [2024-12-09 11:15:19.799603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.993 [2024-12-09 11:15:19.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:18.993 qpair failed and we were unable to recover it.
01:04:18.994 [2024-12-09 11:15:19.809419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:18.994 [2024-12-09 11:15:19.809452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:18.994 qpair failed and we were unable to recover it.
01:04:18.995 [2024-12-09 11:15:19.817919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.817935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 
01:04:18.995 [2024-12-09 11:15:19.818534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.818989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.819161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 
01:04:18.995 [2024-12-09 11:15:19.819333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.819577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.819752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.819860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.819982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.819997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 
01:04:18.995 [2024-12-09 11:15:19.820174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.820189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.820400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.820416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.820521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.995 [2024-12-09 11:15:19.820537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.995 qpair failed and we were unable to recover it. 01:04:18.995 [2024-12-09 11:15:19.820714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.820730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.820813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.820828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.820947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.820964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.821050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.821165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.821344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.821586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.821760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.821889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.821907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.822454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.822831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.822847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.823004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.823242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.823414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.823613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.823749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.823922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.823942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.824099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.824115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.824348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.824364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.824521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.824537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.824707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.824724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.824867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.824883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.824986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.825002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.825170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.825187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.825413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.825429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.825586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.825602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.825817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.825833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.826045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.826235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.826421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.826545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.826721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 
01:04:18.996 [2024-12-09 11:15:19.826883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.826899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.996 qpair failed and we were unable to recover it. 01:04:18.996 [2024-12-09 11:15:19.827047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.996 [2024-12-09 11:15:19.827063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.827231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.827247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.827459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.827691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.827708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 
01:04:18.997 [2024-12-09 11:15:19.827916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.827932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.828094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.828110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.828341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.828357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.828583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.828599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.828755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.828771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 
01:04:18.997 [2024-12-09 11:15:19.828930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.828947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.829026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.829044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.829275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.829291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.829526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.829542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.829651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.829666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 
01:04:18.997 [2024-12-09 11:15:19.829811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.829827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.829995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.830232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.830338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.830517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 
01:04:18.997 [2024-12-09 11:15:19.830695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.830810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.830923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.830938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.831152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.831167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 01:04:18.997 [2024-12-09 11:15:19.831381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:18.997 [2024-12-09 11:15:19.831397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:18.997 qpair failed and we were unable to recover it. 
01:04:19.000 [2024-12-09 11:15:19.849992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.850008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.850163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.850179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.850419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.850435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.850652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.850669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.850822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.850838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 
01:04:19.000 [2024-12-09 11:15:19.851047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.851062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.851161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.851176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.851392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.851408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.851579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.851595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.851780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.851796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 
01:04:19.000 [2024-12-09 11:15:19.852017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.852243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.852473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.852588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.852754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 
01:04:19.000 [2024-12-09 11:15:19.852918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.852934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.853036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.853052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.853205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.853221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.000 qpair failed and we were unable to recover it. 01:04:19.000 [2024-12-09 11:15:19.853373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.000 [2024-12-09 11:15:19.853389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.853555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.853570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.853760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.853776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.853918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.853934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.854144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.854160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.854262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.854278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.854418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.854434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.854642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.854673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.854844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.854860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.855082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.855201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.855439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.855598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.855771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.855945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.855961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.856105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.856121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.856374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.856389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.856606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.856622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.856858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.856876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.857039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.857211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.857435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.857623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.857726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.857898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.857915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.858320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.858898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.858914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.859141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.859305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.859416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.859588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.859716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 
01:04:19.001 [2024-12-09 11:15:19.859873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.859888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.860095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.860111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.001 qpair failed and we were unable to recover it. 01:04:19.001 [2024-12-09 11:15:19.860205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.001 [2024-12-09 11:15:19.860220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.860381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.860396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.860537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.860552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 
01:04:19.002 [2024-12-09 11:15:19.860636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.860657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.860884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.860900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.861072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.861088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.861317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.861333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.861492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.861508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 
01:04:19.002 [2024-12-09 11:15:19.861673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.861689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.861833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.861848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.862006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.862177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.862297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 
01:04:19.002 [2024-12-09 11:15:19.862409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.862598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.862790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.862807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.863037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.863052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 01:04:19.002 [2024-12-09 11:15:19.863278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.002 [2024-12-09 11:15:19.863294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.002 qpair failed and we were unable to recover it. 
01:04:19.002 [2024-12-09 11:15:19.863392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.863408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.863508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.863526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.863600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.863616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.863795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.863811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.863953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.863969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.864939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.864955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.865204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.865313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.865470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.865583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.002 [2024-12-09 11:15:19.865763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.002 qpair failed and we were unable to recover it.
01:04:19.002 [2024-12-09 11:15:19.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.866004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.866196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.866241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.866401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.866445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.866666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.866712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.866903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.866919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.867078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.867122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.867416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.867460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.867621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.867678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.867898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.867943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.868229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.868245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.868340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.868356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.868524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.868569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.868913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.868959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.869191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.869236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.869459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.869504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.869734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.869781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.869931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.869947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.870978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.870994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.871095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.871112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.871282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.871332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.871610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.871664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.871899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.871945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.872122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.872167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.872371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.872387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.872622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.872638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.872892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.872938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.873252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.873296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.873578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.873623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.873939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.873984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.874221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.874377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.874421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.874631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.874689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.874869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.874914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.875198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.003 [2024-12-09 11:15:19.875253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.003 qpair failed and we were unable to recover it.
01:04:19.003 [2024-12-09 11:15:19.875354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.875370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.875575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.875590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.875773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.875813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.876047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.876091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.876302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.876346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.876633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.876688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.876901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.876945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.877250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.877295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.877607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.877662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.877901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.877945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.878164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.878208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.878464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.878479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.878720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.878897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.878942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.879231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.879276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.879508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.879552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.879707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.879753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.880048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.880092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.880313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.880328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.880553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.880568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.880802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.880819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.880978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.880994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.881138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.881154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.881309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.881324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.881564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.881609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.881855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.881907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.882205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.882250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.882541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.882586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.882831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.882877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.883128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.883143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.883306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.883322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.883478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.883493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.883710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.883756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.883987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.884032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.884253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.884269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.884427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.884443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.884535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.884551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.884774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.884790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.884980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.885025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.004 qpair failed and we were unable to recover it.
01:04:19.004 [2024-12-09 11:15:19.885292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.004 [2024-12-09 11:15:19.885337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.885578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.885623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.885936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.885982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.886225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.886269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.886560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.886605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.886904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.886950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.887278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.887324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.887553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.887599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.887964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.888048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.888255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.888272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.888441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.888486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.888726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.888777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.889077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.889122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.889424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.889510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.889826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.889873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.890119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.890165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.890334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.890349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.890566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.890610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.890959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.891016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.891247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.005 [2024-12-09 11:15:19.891262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.005 qpair failed and we were unable to recover it.
01:04:19.005 [2024-12-09 11:15:19.891418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.891434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.891667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.891714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.892015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.892060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.892285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.892300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.892462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.892477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 
01:04:19.005 [2024-12-09 11:15:19.892635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.892693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.892947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.892998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.893286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.893330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.893555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.893599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.893920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.893976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 
01:04:19.005 [2024-12-09 11:15:19.894136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.894159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.894339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.894361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.894464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.894480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.894642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.894663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.894825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.894841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 
01:04:19.005 [2024-12-09 11:15:19.895065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.895082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.895243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.895259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.895429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.895445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.895553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.895569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 01:04:19.005 [2024-12-09 11:15:19.895742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.005 [2024-12-09 11:15:19.895758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.005 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.895864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.895880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.896051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.896162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.896261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.896443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.896619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.896871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.896887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.897067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.897083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.897312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.897327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.897534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.897550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.897655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.897671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.897836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.897852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.897995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.898115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.898317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.898594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.898804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.898981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.898996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.899241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.899406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.899574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.899762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.899883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.899983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.899999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.900094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.900110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.900261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.900276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.900507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.900523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.900701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.900717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.900868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.900884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.901068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.901084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 
01:04:19.006 [2024-12-09 11:15:19.901320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.901335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.901506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.901521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.901729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.901745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.006 [2024-12-09 11:15:19.901970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.006 [2024-12-09 11:15:19.901986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.006 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.902078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.902095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 
01:04:19.007 [2024-12-09 11:15:19.902294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.902339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.902507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.902551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.902783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.902836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.903089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.903105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.903276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 
01:04:19.007 [2024-12-09 11:15:19.903438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.903454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.903544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.903560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.903774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.903820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.903973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.904018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.904293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.904338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 
01:04:19.007 [2024-12-09 11:15:19.904617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.904674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.904917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.904962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.905094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.905110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.905324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.905368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.905597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.905641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 
01:04:19.007 [2024-12-09 11:15:19.905952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.906000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.906141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.906156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.906390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.906434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.906576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.906628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 01:04:19.007 [2024-12-09 11:15:19.906874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.007 [2024-12-09 11:15:19.906919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.007 qpair failed and we were unable to recover it. 
01:04:19.007 [2024-12-09 11:15:19.907229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.007 [2024-12-09 11:15:19.907274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.007 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111, ECONNREFUSED) for tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 repeated continuously from 11:15:19.907229 through 11:15:19.937790; duplicate log entries elided ...]
01:04:19.010 [2024-12-09 11:15:19.938003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.938048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.938281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.938324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.938550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.938595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.938931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.939312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 
01:04:19.010 [2024-12-09 11:15:19.939431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.939788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.939979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.939994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.940145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.940189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 
01:04:19.010 [2024-12-09 11:15:19.940488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.940533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.940721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.940768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.941061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.941106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.941321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.941366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.941596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.941640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 
01:04:19.010 [2024-12-09 11:15:19.941970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.942017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.942312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.942328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.942479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.942494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.942652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.942682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.010 qpair failed and we were unable to recover it. 01:04:19.010 [2024-12-09 11:15:19.942792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.010 [2024-12-09 11:15:19.942807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.942975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.943019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.943203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.943247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.943471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.943515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.943746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.943792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.944100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.944208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.944380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.944552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.944686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.944903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.944948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.945294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.945338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.945614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.945669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.945927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.945972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.946191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.946207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.946375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.946419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.946631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.946688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.946920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.946964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.947208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.947252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.947404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.947419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.947562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.947578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.947792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.947808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.948028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.948044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.948219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.948263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.948559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.948609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.948889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.948935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.949116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.949161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.949386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.949402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.949510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.949554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.949797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.949842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.950062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.950106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 
01:04:19.011 [2024-12-09 11:15:19.950321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.950365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.950575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.950619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.950877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.950922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.951149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.951191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.011 qpair failed and we were unable to recover it. 01:04:19.011 [2024-12-09 11:15:19.951493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.011 [2024-12-09 11:15:19.951538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.951783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.951830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.952118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.952162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.952341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.952387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.952652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.952668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.952770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.952786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.952964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.952979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.953079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.953094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.953236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.953280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.953508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.953552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.953848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.953894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.954058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.954102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.954396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.954441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.954667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.954713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.955022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.955066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.955287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.955332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.955516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.955532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.955610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.955625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.955796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.955813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.955962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.956006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.956236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.956281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.956585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.956628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.956911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.956956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.957134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.957150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.957326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.957371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.957536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.957580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.957817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.957862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.958090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.958136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.958344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.958389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.958583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.958600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 01:04:19.012 [2024-12-09 11:15:19.958745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.012 [2024-12-09 11:15:19.958791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.012 qpair failed and we were unable to recover it. 
01:04:19.012 [2024-12-09 11:15:19.959100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.959375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.959507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.959688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.959928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.959944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.960028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.960043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.960248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.960284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.012 [2024-12-09 11:15:19.960565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.012 [2024-12-09 11:15:19.960609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.012 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.960791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.960835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.961109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.961153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.961429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.961474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.961759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.961805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.962039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.962083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.962304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.962319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.962496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.962540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.962781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.962827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.963061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.963105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.963297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.963312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.963466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.963510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.963848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.963933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.964093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.964110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.964269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.964286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.964534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.964551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.964735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.964752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.964938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.964971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.965206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.965224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.965384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.965399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.965571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.965589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.965698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.965714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.965865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.965882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.966846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.966862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.967931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.967946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.968111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.968127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.968216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.968231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.968392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.968409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.968550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.968566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.013 [2024-12-09 11:15:19.968830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.013 [2024-12-09 11:15:19.968847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.013 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.968933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.968949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.969091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.969107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.969263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.969279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.969504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.969520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.969732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.969748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.969912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.969927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.970135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.970151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.970328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.970344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.970497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.970512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.970705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.970722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.970910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.970926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.971086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.971101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.971188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.971204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.971392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.971410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.971601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.971616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.971829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.971846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.972892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.972907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.973867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.973883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.974900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.974915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.975023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.975068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.975240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.975289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.975522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.975566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.975776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.975827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.975992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.014 [2024-12-09 11:15:19.976037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.014 qpair failed and we were unable to recover it.
01:04:19.014 [2024-12-09 11:15:19.976247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.976292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.976593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.976637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.976932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.976977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.977215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.977260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.977563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.977608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.977844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.977891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.978065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.978109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.978265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.015 [2024-12-09 11:15:19.978308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.015 qpair failed and we were unable to recover it.
01:04:19.015 [2024-12-09 11:15:19.978517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.978562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.978876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.978924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.979114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.979158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.979450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.979495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.979781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.979828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 
01:04:19.015 [2024-12-09 11:15:19.979980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.980024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.980184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.980228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.980446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.980491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.980656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.980701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.980995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.981044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 
01:04:19.015 [2024-12-09 11:15:19.981249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.981265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.981369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.981385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.981556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.981602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.981774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.981819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.982108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.982152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 
01:04:19.015 [2024-12-09 11:15:19.982366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.982412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.982730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.982776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.983065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.983111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.983352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.983396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.983694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.983741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 
01:04:19.015 [2024-12-09 11:15:19.983908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.983952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.984236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.984469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.984485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.984654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.984673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.984826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.984880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 
01:04:19.015 [2024-12-09 11:15:19.985161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.985206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.015 [2024-12-09 11:15:19.985491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.015 [2024-12-09 11:15:19.985536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.015 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.985832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.985878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.986109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.986153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.986438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.986491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.986650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.986667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.986903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.986949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.987178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.987222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.987465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.987509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.987814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.987861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.988092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.988137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.988409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.988662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.988678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.988840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.988855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.988998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.989014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.989086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.989101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.989201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.989242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.989478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.989523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.989752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.989798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.989999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.990042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.990271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.990316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.990565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.990580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.990738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.990753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.990986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.991031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.991310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.991354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.991588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.991620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.991830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.991862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.991960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.992000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.992232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.992278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.992530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.992575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.992813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.992858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.993087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.993132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.993346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.993391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.993678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.993725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.994005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.994050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.016 [2024-12-09 11:15:19.994222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.994268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.994428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.994473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.994763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.994809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.995037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.995091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 01:04:19.016 [2024-12-09 11:15:19.995325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.016 [2024-12-09 11:15:19.995369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.016 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:19.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.995635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.995877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.995922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.996214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.996259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.996366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.996382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.996559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.996574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:19.996761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.996776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.996895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.996911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.997073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.997118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.997354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.997398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.997640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.997698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:19.997869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.997914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.998190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.998235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.998402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.998447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.998651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.998667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.998855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.998901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:19.999064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.999109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.999471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.999515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.999682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:19.999730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:19.999961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.000005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.000227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.000273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:20.000567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.000612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.000861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.000907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.001082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.001098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.001224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.001240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.001414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.001431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:20.001620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.001659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.001798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.001816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.001991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.002216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.002457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:20.002668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.002784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.002955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.002970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 
01:04:19.017 [2024-12-09 11:15:20.003361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.017 [2024-12-09 11:15:20.003904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.017 qpair failed and we were unable to recover it. 01:04:19.017 [2024-12-09 11:15:20.003984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.003999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.004148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.004355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.004460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.004577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.004701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.004877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.004892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.004995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.005124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.005368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.005498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.005682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.005855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.005973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.005989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.006094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.006110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.006272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.006288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.006440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.006456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.006694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.006711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.006821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.006842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.006999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.007172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.007355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.007511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.007685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.007800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.007815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.008043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.008218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.008391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.008507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.008681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.008878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.008895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.018 [2024-12-09 11:15:20.009051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.009067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.009170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.009186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.009276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.009290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.009460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.009477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 01:04:19.018 [2024-12-09 11:15:20.009690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.018 [2024-12-09 11:15:20.009706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.018 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.009913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.009929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.010093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.010295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.010506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.010676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.010833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.010944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.010960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.011118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.011133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.011296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.011311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.011551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.011568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.011764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.011781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.011959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.011975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.012074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.012251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.012448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.012615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.012807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.012969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.012986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.013096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.013112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.013332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.013348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.013578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.013594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.013741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.013757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.013903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.013918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.014060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.014075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.014283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.014299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 
01:04:19.019 [2024-12-09 11:15:20.014451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.014467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.019 [2024-12-09 11:15:20.014558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.019 [2024-12-09 11:15:20.014574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.019 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.014741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.014758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.014864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.014879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.014988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 
01:04:19.020 [2024-12-09 11:15:20.015246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.015417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.015594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.015757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.015853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.015868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 
01:04:19.020 [2024-12-09 11:15:20.016035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.016206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.016322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.016430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.016536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 
01:04:19.020 [2024-12-09 11:15:20.016688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.016868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.016884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.017094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.017109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.017276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.017292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.017514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.017530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 
01:04:19.020 [2024-12-09 11:15:20.017642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.017664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.017896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.017911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.018066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.018082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.018250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.018266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.020 qpair failed and we were unable to recover it. 01:04:19.020 [2024-12-09 11:15:20.018489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.020 [2024-12-09 11:15:20.018504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.021 qpair failed and we were unable to recover it. 
01:04:19.021 [2024-12-09 11:15:20.018652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.018668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.018835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.018851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.018964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.018980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.019197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.019213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.019422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.019438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.019593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.019609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.019853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.019869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.020977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.020993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.021982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.021999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.022212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.022227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.022337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.022355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.022452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.022468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.022628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.022651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.022825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.022841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.023008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.023039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.023205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.023221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.021 [2024-12-09 11:15:20.023427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.021 [2024-12-09 11:15:20.023444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.021 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.023602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.023618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.023781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.023797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.024042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.024058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.024207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.024223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.024454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.024470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.024686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.024703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.024936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.024953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.025958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.025974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.026948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.026964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.027079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.027251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.027346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.027542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.022 [2024-12-09 11:15:20.027683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.022 qpair failed and we were unable to recover it.
01:04:19.022 [2024-12-09 11:15:20.027830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.027846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.027928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.027943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.028909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.028924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.029915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.029932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.030841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.030858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.031020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.031038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.031130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.023 [2024-12-09 11:15:20.031147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.023 qpair failed and we were unable to recover it.
01:04:19.023 [2024-12-09 11:15:20.031249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.031360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.031499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.031680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.031788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.031910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.031927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.032957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.024 [2024-12-09 11:15:20.032974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.024 qpair failed and we were unable to recover it.
01:04:19.024 [2024-12-09 11:15:20.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 
01:04:19.024 [2024-12-09 11:15:20.033745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.033978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.033995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.034082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.034203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 
01:04:19.024 [2024-12-09 11:15:20.034309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.034416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.034528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.024 qpair failed and we were unable to recover it. 01:04:19.024 [2024-12-09 11:15:20.034622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.024 [2024-12-09 11:15:20.034639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.034772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.034789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.034871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.034887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.034976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.034993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.035400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.035904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.035921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.036011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.036658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.036877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.036892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.037324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 01:04:19.025 [2024-12-09 11:15:20.037831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.025 qpair failed and we were unable to recover it. 
01:04:19.025 [2024-12-09 11:15:20.037935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.025 [2024-12-09 11:15:20.037952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 
01:04:19.026 [2024-12-09 11:15:20.038507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.038976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.038993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.039156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 
01:04:19.026 [2024-12-09 11:15:20.039259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.039354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.039475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.039572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.039661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 
01:04:19.026 [2024-12-09 11:15:20.039863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.039880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 
01:04:19.026 [2024-12-09 11:15:20.040456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.040940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.040956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.041030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.041047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 
01:04:19.026 [2024-12-09 11:15:20.041125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.041141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.026 qpair failed and we were unable to recover it. 01:04:19.026 [2024-12-09 11:15:20.041228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.026 [2024-12-09 11:15:20.041245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.041389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.041405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.041507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.041522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.041700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.041716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 
01:04:19.027 [2024-12-09 11:15:20.041793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.041809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.041886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.041902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.041990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 
01:04:19.027 [2024-12-09 11:15:20.042320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.042852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 
01:04:19.027 [2024-12-09 11:15:20.042949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.042966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 
01:04:19.027 [2024-12-09 11:15:20.043575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.043949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.043965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.044040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.044055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 
01:04:19.027 [2024-12-09 11:15:20.044134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.044149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.044237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.044251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.044464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.044479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.044633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.027 [2024-12-09 11:15:20.044655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.027 qpair failed and we were unable to recover it. 01:04:19.027 [2024-12-09 11:15:20.044745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.044760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.044901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.044916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.045565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.045917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.045932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.046136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.046870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.046886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.046992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.047522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.047926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.047943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.048036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.048052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 
01:04:19.028 [2024-12-09 11:15:20.048137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.048153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.048238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.048257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.048409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.048425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.028 [2024-12-09 11:15:20.048575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.028 [2024-12-09 11:15:20.048591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.028 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.048699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.048714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.048875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.048894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.048992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.049512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.049942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.049956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.050209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.050829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.050924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.050939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.051446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.051863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 
01:04:19.029 [2024-12-09 11:15:20.051971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.051988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.052067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.052084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.052229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.052246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.029 qpair failed and we were unable to recover it. 01:04:19.029 [2024-12-09 11:15:20.052407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.029 [2024-12-09 11:15:20.052425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.052632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.052652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.052812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.052829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.052918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.052934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.053327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.053793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.053968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.053987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.054654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.054901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.054917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.055341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.055839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 
01:04:19.030 [2024-12-09 11:15:20.055939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.055955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.056028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.056043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.030 [2024-12-09 11:15:20.056134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.030 [2024-12-09 11:15:20.056148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.030 qpair failed and we were unable to recover it. 01:04:19.031 [2024-12-09 11:15:20.056307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.031 [2024-12-09 11:15:20.056322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.031 qpair failed and we were unable to recover it. 01:04:19.031 [2024-12-09 11:15:20.056422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.031 [2024-12-09 11:15:20.056437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.031 qpair failed and we were unable to recover it. 
01:04:19.034 [2024-12-09 11:15:20.070653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.070684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.070773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.070789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.070927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.070942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 
01:04:19.034 [2024-12-09 11:15:20.071232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 01:04:19.034 [2024-12-09 11:15:20.071677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.034 [2024-12-09 11:15:20.071694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.034 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.071865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.071881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.071957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.071973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.072395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.072893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.072908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.072990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.073660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.073961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.073976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.074259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.074858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.074964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.074980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.075489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.075888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.075906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.076110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.076177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 
01:04:19.035 [2024-12-09 11:15:20.076397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.076479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.076768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.076877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.077147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.077229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.035 [2024-12-09 11:15:20.077461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.035 [2024-12-09 11:15:20.077518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.035 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.077782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.077854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.078046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.078694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.078921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.078941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.079243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.079810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.079917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.079932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.080427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 01:04:19.036 [2024-12-09 11:15:20.080904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.036 [2024-12-09 11:15:20.080919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.036 qpair failed and we were unable to recover it. 
01:04:19.036 [2024-12-09 11:15:20.081065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.081867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.081882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.082031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.082046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.082128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.082143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.082225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.082239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.036 [2024-12-09 11:15:20.082313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.036 [2024-12-09 11:15:20.082330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.036 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.082880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.082896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.083941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.083956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.084057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.084100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.084342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.084385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.084609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.084671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.084873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.084920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.085170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.085220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.085455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.085515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.085696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.085743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.085857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.085874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.086946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.086962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.087055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.087142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.087277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.087457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.037 [2024-12-09 11:15:20.087738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.037 qpair failed and we were unable to recover it.
01:04:19.037 [2024-12-09 11:15:20.087860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.087875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.087958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.087973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.088076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.088118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.088266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.088308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.088591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.088633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.088887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.088931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.089079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.089121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.089340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.089383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.089592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.089607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.089828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.089874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.090079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.090122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.090330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.090374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.090544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.090560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.090719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.090765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.091038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.091082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.091244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.091289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.091450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.091498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.091649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.091668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.091825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.091863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.092883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.092924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.093176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.093220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.093488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.093533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.093684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.093730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.093891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.093935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.094103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.094147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.094426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.094471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.094615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.094671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.094898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.094914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.095061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.095105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.095261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.095304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.095517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.095560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.095650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.095681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.095905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.095921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.096072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.038 [2024-12-09 11:15:20.096127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.038 qpair failed and we were unable to recover it.
01:04:19.038 [2024-12-09 11:15:20.096394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.096436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.096615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.096677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.096891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.096908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.097973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.097989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.098108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.098153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.098334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.098387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.098575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.098622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.098812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.098828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.098977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.098992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.099140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.099155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.099301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.099317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.099428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.099473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.099703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.099760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.099945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.099993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.100213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.100256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.100423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.100467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.100689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.100710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.100834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.100882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.101901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.101948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.102158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.039 [2024-12-09 11:15:20.102202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.039 qpair failed and we were unable to recover it.
01:04:19.039 [2024-12-09 11:15:20.102410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.102454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.102632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.102686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.102855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.102871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.103044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.103088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.103235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.103279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 
01:04:19.039 [2024-12-09 11:15:20.103449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.103493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.103737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.103756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.103832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.103877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.104103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.104149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 01:04:19.039 [2024-12-09 11:15:20.104366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.039 [2024-12-09 11:15:20.104409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.039 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.104554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.104574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.104663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.104680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.104798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.104814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.104968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.104985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.105097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.105237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.105291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.105465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.105525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.105699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.105746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.105902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.105923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.106034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.106191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.106357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.106513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.106635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.106849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.106892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.107058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.107330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.107586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.107774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.107873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.107970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.107985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.108605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.108916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.108960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.109196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.109240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.109459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.109505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.109705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.109765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.109928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.109974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.110146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.110190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.110449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.110471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 
01:04:19.040 [2024-12-09 11:15:20.110631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.110657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.110830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.110847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.111013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.111057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.040 [2024-12-09 11:15:20.111218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.040 [2024-12-09 11:15:20.111262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.040 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.111407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.111452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.111604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.111658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.111802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.111846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.112006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.112052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.112220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.112267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.112458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.112525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.112634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.112655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.112810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.112855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.113610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.113858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.113971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.114165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.114372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.114567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.114656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.114892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.114907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.115011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.115055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.115209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.115253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.115414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.115464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.115707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.115730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.115840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.115887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 01:04:19.041 [2024-12-09 11:15:20.116042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.041 [2024-12-09 11:15:20.116088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.041 qpair failed and we were unable to recover it. 
01:04:19.041 [2024-12-09 11:15:20.116307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.041 [2024-12-09 11:15:20.116351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.041 qpair failed and we were unable to recover it.
[The three-line error sequence above — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp qpair connection failure and "qpair failed and we were unable to recover it." — repeats continuously from 11:15:20.116 through 11:15:20.137, always against addr=10.0.0.2, port=4420, with only the timestamps and the tqpair addresses (0x7f1dc8000b90, 0x7f1dcc000b90, 0x7f1dd4000b90, 0x5f84d0) varying. Repeated entries elided.]
01:04:19.343 [2024-12-09 11:15:20.137276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.137291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.137371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.137386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.137530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.137545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.137682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.137697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.137861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.137904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 
01:04:19.343 [2024-12-09 11:15:20.138066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.138110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.138261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.138304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.138523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.138566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.138878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.138931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 
01:04:19.343 [2024-12-09 11:15:20.139254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 
01:04:19.343 [2024-12-09 11:15:20.139850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.139934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.139949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.140034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.140254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.140457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 
01:04:19.343 [2024-12-09 11:15:20.140661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.140773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.140944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.140979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.141134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.141328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 
01:04:19.343 [2024-12-09 11:15:20.141588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.141708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.141794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.141876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.343 [2024-12-09 11:15:20.141891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.343 qpair failed and we were unable to recover it. 01:04:19.343 [2024-12-09 11:15:20.142027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.142112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.142211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.142431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.142627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.142911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.142963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.143179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.143225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.143391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.143434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.143580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.143594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.143690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.143707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.143859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.143902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.144051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.144257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.144439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.144688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.144856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.144974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.144988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.145063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.145154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.145321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.145512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.145714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.145923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.145969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.146123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.146170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.146347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.146391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.146662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.146713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.146871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.146887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.147043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.147062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.147145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.147159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.147296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.147312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.147472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.147513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.147684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.147731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.147958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.148155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.148374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.148573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 
01:04:19.344 [2024-12-09 11:15:20.148669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.148771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.344 [2024-12-09 11:15:20.148876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.344 [2024-12-09 11:15:20.148891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.344 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.148969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.149197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.149471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.149682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.149802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.149896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.149943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.150087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.150130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.150286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.150329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.150476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.150521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.150655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.150687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.150838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.150855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.150989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.151098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.151351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.151636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.151776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.151883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.151897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.152002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.152045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.152259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.152303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.152524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.152562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.152654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.152685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.152816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.152866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.153023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.153067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.153259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.153305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.153481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.153525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.153683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.153729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.153939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.153985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.154271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.154314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.154497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.154540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.154736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.154752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.154832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.154848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.154943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.154987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 
01:04:19.345 [2024-12-09 11:15:20.155217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.155264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.155483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.155529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.155660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.155693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.155780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.155796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.345 qpair failed and we were unable to recover it. 01:04:19.345 [2024-12-09 11:15:20.155864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.345 [2024-12-09 11:15:20.155878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.155966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.155983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.156081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.156125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.156280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.156325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.156537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.156584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.156812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.156829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.156916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.156931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.157461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.157767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.157998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.158044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.158203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.158249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.158410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.158454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.158667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.158713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.158858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.158901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.158996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.159330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.159906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.159950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.160093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.160137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.160353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.160397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.160562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.160605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.160756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.160772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 01:04:19.346 [2024-12-09 11:15:20.160882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.346 [2024-12-09 11:15:20.160897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.346 qpair failed and we were unable to recover it. 
01:04:19.346 [2024-12-09 11:15:20.160969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.160984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.161581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.161895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.161986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.162085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.162185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.162349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.162474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.162670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.162871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.162917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.163070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.163117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.163265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.163308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.163462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.163505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.163661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.163704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.163960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.163974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.164118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.164161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.164381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.164425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.164565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.164612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.164702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.164717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.164900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.164933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.165160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.165375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.165554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.165670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.165823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.165932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.165947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.166032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.166198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.166363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 
01:04:19.347 [2024-12-09 11:15:20.166488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.166688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.166889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.166939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.167139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.167185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.347 qpair failed and we were unable to recover it. 01:04:19.347 [2024-12-09 11:15:20.167429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.347 [2024-12-09 11:15:20.167475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 
01:04:19.348 [2024-12-09 11:15:20.167641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.348 [2024-12-09 11:15:20.167701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 01:04:19.348 [2024-12-09 11:15:20.167849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.348 [2024-12-09 11:15:20.167864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 01:04:19.348 [2024-12-09 11:15:20.167952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.348 [2024-12-09 11:15:20.167967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 01:04:19.348 [2024-12-09 11:15:20.168050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.348 [2024-12-09 11:15:20.168065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 01:04:19.348 [2024-12-09 11:15:20.168135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.348 [2024-12-09 11:15:20.168150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.348 qpair failed and we were unable to recover it. 
01:04:19.348 [2024-12-09 11:15:20.168309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.168352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.168505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.168549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.168757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.168808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.168964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.168978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.169931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.169945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.170093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.170107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.170188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.170226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.170439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.170485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.170724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.170763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.170863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.170878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.171088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.171131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.171285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.171329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.171621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.171683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.171861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.171907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.172069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.172117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.172337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.172383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.172597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.172643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.172811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.172868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.173024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.173038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.173114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.173143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.173359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.173403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.173631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.173688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.173896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.348 [2024-12-09 11:15:20.173911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.348 qpair failed and we were unable to recover it.
01:04:19.348 [2024-12-09 11:15:20.174069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.174164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.174254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.174417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.174640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.174859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.174903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.175072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.175116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.175277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.175321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.175549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.175596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.175845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.175892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.176106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.176152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.176310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.176356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.176511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.176553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.176760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.176806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.176957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.176971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.177111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.177127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.177247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.177262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.177410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.177461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.177620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.177674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.177832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.177876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.178071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.178086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.178235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.178279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.178539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.178585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.178808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.178855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.179912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.179934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.180047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.180063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.180206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.180222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.180391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.180435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.180596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.180640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.180864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.180911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.181090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.181105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.181222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.181264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.181484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.181527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.181676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.349 [2024-12-09 11:15:20.181720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.349 qpair failed and we were unable to recover it.
01:04:19.349 [2024-12-09 11:15:20.181879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.181894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.182893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.182939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.183113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.183156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.183385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.183429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.183658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.183704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.183921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.183966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.184938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.184953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.185089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.185104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.185205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.185263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.185429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.185475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.185625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.185683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.185972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.185989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.186098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.186115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.186293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.186336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.186485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.186528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.186732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.186778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.186924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.350 [2024-12-09 11:15:20.186939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.350 qpair failed and we were unable to recover it.
01:04:19.350 [2024-12-09 11:15:20.187089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.187128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.187344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.187388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.187631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.187696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.187824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.187839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.187926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.187943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 
01:04:19.350 [2024-12-09 11:15:20.188029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.188189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.188339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.188446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.188594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 
01:04:19.350 [2024-12-09 11:15:20.188690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.350 qpair failed and we were unable to recover it. 01:04:19.350 [2024-12-09 11:15:20.188856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.350 [2024-12-09 11:15:20.188871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.188965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.188980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.189274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.189957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.189972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.190069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.190273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.190475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.190685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.190807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.190916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.190959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.191173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.191217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.191420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.191463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.191613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.191628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.191744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.191761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.191895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.191909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.191999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.192447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.192811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.192961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.193008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.193183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.193230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.193383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.193429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.193583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.193628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.193794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.193839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.194016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.194053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 01:04:19.351 [2024-12-09 11:15:20.194210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.351 [2024-12-09 11:15:20.194226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.351 qpair failed and we were unable to recover it. 
01:04:19.351 [2024-12-09 11:15:20.194368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.194383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.194473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.194487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.194565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.194612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.194838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.194885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.195042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.195239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.195502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.195710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.195832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.195936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.195980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.196130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.196174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.196340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.196385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.196597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.196642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.196791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.196806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.196891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.196906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.197067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.197110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.197356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.197401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.197621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.197678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.197896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.197943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.198107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.198153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.198362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.198405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.198555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.198599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.198763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.198809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.198922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.198938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.199162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.199831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.199933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.199947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.200038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.200134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.200316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 
01:04:19.352 [2024-12-09 11:15:20.200525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.200798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.200954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.200970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.201074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.352 [2024-12-09 11:15:20.201089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.352 qpair failed and we were unable to recover it. 01:04:19.352 [2024-12-09 11:15:20.201229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.201246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.201386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.201717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.201764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.201918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.201941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.202034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.202051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.202148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.202193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.202359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.202405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.202636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.202696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.202820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.202836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.202978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.203024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.203175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.203219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.203420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.203463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.203675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.203720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.203928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.203972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.204429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.204847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.204892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.205045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.205091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.205269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.205315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.205464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.205509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.205668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.205714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.205952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.205996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.206203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.206246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.206474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.206521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.206675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.206691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.206789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.206804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2557995 Killed "${NVMF_APP[@]}" "$@"
01:04:19.353 [2024-12-09 11:15:20.206892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.206909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.207050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.207065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.207155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.207170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.207263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.207278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.207359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.207374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 01:04:19.353 [2024-12-09 11:15:20.207511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.353 [2024-12-09 11:15:20.207526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.353 qpair failed and we were unable to recover it. 
01:04:19.353 [2024-12-09 11:15:20.207599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.207614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.207704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.207720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.353 [2024-12-09 11:15:20.207807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.353 [2024-12-09 11:15:20.207822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.353 qpair failed and we were unable to recover it.
01:04:19.354 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
01:04:19.354 [2024-12-09 11:15:20.207895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.207913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.207994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
01:04:19.354 [2024-12-09 11:15:20.208288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
01:04:19.354 [2024-12-09 11:15:20.208663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.208860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
01:04:19.354 [2024-12-09 11:15:20.208968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.208984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.209077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.354 [2024-12-09 11:15:20.209233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.209335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.209485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.354 [2024-12-09 11:15:20.209582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.354 qpair failed and we were unable to recover it.
01:04:19.354 [2024-12-09 11:15:20.209671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.209687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.209760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.209776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.209925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.209940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 
01:04:19.354 [2024-12-09 11:15:20.210236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 
01:04:19.354 [2024-12-09 11:15:20.210777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.210935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.210950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 
01:04:19.354 [2024-12-09 11:15:20.211492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.211920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.211935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 
01:04:19.354 [2024-12-09 11:15:20.212007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.354 [2024-12-09 11:15:20.212022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.354 qpair failed and we were unable to recover it. 01:04:19.354 [2024-12-09 11:15:20.212094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.212568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.212914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.212928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.213168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.213736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.213949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.213966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.214249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.214752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.214929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.214944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.215224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 01:04:19.355 [2024-12-09 11:15:20.215738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.355 qpair failed and we were unable to recover it. 
01:04:19.355 [2024-12-09 11:15:20.215963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.355 [2024-12-09 11:15:20.215978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 
01:04:19.356 [2024-12-09 11:15:20.216458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.216867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.216882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 
01:04:19.356 [2024-12-09 11:15:20.217021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.217036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.217234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.217249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.217320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.217335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.217417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.217432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 01:04:19.356 [2024-12-09 11:15:20.217505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.356 [2024-12-09 11:15:20.217519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.356 qpair failed and we were unable to recover it. 
01:04:19.356 [2024-12-09 11:15:20.217583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.217598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.217682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.217698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.217782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.217797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.217882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.217897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2558566
01:04:19.356 [2024-12-09 11:15:20.218051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2558566
01:04:19.356 [2024-12-09 11:15:20.218419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
01:04:19.356 [2024-12-09 11:15:20.218434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.218854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.218869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2558566 ']'
01:04:19.356 [2024-12-09 11:15:20.219018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
01:04:19.356 [2024-12-09 11:15:20.219675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.219782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.219796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:04:19.356 [2024-12-09 11:15:20.219953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:04:19.356 [2024-12-09 11:15:20.219968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.356 [2024-12-09 11:15:20.220152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.356 [2024-12-09 11:15:20.220167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.356 qpair failed and we were unable to recover it.
01:04:19.357 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
01:04:19.357 [2024-12-09 11:15:20.220273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 [2024-12-09 11:15:20.220386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 [2024-12-09 11:15:20.220494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.357 [2024-12-09 11:15:20.220586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 [2024-12-09 11:15:20.220709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 [2024-12-09 11:15:20.220855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.357 [2024-12-09 11:15:20.220870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.357 qpair failed and we were unable to recover it.
01:04:19.357 [2024-12-09 11:15:20.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.220991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.221655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.221975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.221989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.222187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.222725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.222913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.222928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.223220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.223738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.223858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.223999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.224014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.224093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.224106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.224202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.224217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 
01:04:19.357 [2024-12-09 11:15:20.224356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.224371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.224449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.357 [2024-12-09 11:15:20.224465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.357 qpair failed and we were unable to recover it. 01:04:19.357 [2024-12-09 11:15:20.224614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.224629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.224730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.224746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.224819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.224833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 
01:04:19.358 [2024-12-09 11:15:20.224917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.224932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 
01:04:19.358 [2024-12-09 11:15:20.225442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 01:04:19.358 [2024-12-09 11:15:20.225979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.358 [2024-12-09 11:15:20.225993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.358 qpair failed and we were unable to recover it. 
01:04:19.358 [2024-12-09 11:15:20.226084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.226898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.226913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.227884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.227898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.228904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.228919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.229004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.358 [2024-12-09 11:15:20.229019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.358 qpair failed and we were unable to recover it.
01:04:19.358 [2024-12-09 11:15:20.229102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.229964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.229979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.230946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.230961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.231972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.231988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.232947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.232961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.233105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.233120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.233204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.359 [2024-12-09 11:15:20.233219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.359 qpair failed and we were unable to recover it.
01:04:19.359 [2024-12-09 11:15:20.233302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.233316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.233393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.233408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.233603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.233617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.233773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.233788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.233875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.233889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.234980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.234996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.235937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.235952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.236896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.236992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.237009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.237091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.360 [2024-12-09 11:15:20.237105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.360 qpair failed and we were unable to recover it.
01:04:19.360 [2024-12-09 11:15:20.237195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.360 [2024-12-09 11:15:20.237211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.360 qpair failed and we were unable to recover it. 01:04:19.360 [2024-12-09 11:15:20.237305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.360 [2024-12-09 11:15:20.237320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.360 qpair failed and we were unable to recover it. 01:04:19.360 [2024-12-09 11:15:20.237402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.237417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.237556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.237570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.237648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.237663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.237748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.237763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.237875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.237890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.238889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.238971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.238986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.239353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.239830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.239977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.239991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.240447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.240868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.240882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.241016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.241113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.241347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.361 [2024-12-09 11:15:20.241439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 
01:04:19.361 [2024-12-09 11:15:20.241536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.361 [2024-12-09 11:15:20.241550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.361 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.241714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.241730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.241870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.241884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.241961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.241976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.242135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.242663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.242958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.242972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.243228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.243706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.243951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.243966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.244215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.244727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.244978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.244993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.245064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.245078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 
01:04:19.362 [2024-12-09 11:15:20.245166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.245181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.245316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.245331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.245408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.245422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.245497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.362 [2024-12-09 11:15:20.245512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.362 qpair failed and we were unable to recover it. 01:04:19.362 [2024-12-09 11:15:20.245587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.245601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 
01:04:19.363 [2024-12-09 11:15:20.245739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.245755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 01:04:19.363 [2024-12-09 11:15:20.245891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.245905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 01:04:19.363 [2024-12-09 11:15:20.246007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.246022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 01:04:19.363 [2024-12-09 11:15:20.246094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.246108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 01:04:19.363 [2024-12-09 11:15:20.246191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.246205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 
01:04:19.363 [2024-12-09 11:15:20.246290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.363 [2024-12-09 11:15:20.246305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.363 qpair failed and we were unable to recover it. 
[... the same connect()/qpair-failure pair repeats ~79 more times for tqpair=0x7f1dcc000b90, timestamps 11:15:20.246372 through 11:15:20.254629 ...]
01:04:19.365 [2024-12-09 11:15:20.254712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.365 [2024-12-09 11:15:20.254729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.365 qpair failed and we were unable to recover it. 
[... the same pair repeats ~34 more times for tqpair=0x7f1dd4000b90, timestamps 11:15:20.254814 through 11:15:20.258608 ...]
01:04:19.366 [2024-12-09 11:15:20.258685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.258700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.258775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.258790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.258863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.258878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.258951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.258965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.259200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.259743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.259950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.259964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.260298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.260830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.260930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.260950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.261384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 01:04:19.366 [2024-12-09 11:15:20.261779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.366 [2024-12-09 11:15:20.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.366 qpair failed and we were unable to recover it. 
01:04:19.366 [2024-12-09 11:15:20.261874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.261966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.261981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.262360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.262872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.262959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.262974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.263477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.263844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.263859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.264122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.264705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.264956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.264971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.265324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 
01:04:19.367 [2024-12-09 11:15:20.265846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.265951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.265966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.266067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.266083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.367 [2024-12-09 11:15:20.266168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.367 [2024-12-09 11:15:20.266182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.367 qpair failed and we were unable to recover it. 01:04:19.368 [2024-12-09 11:15:20.266253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 
01:04:19.368 [2024-12-09 11:15:20.266346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 01:04:19.368 [2024-12-09 11:15:20.266436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 01:04:19.368 [2024-12-09 11:15:20.266521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 01:04:19.368 [2024-12-09 11:15:20.266611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 01:04:19.368 [2024-12-09 11:15:20.266722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.368 [2024-12-09 11:15:20.266737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.368 qpair failed and we were unable to recover it. 
01:04:19.368 [2024-12-09 11:15:20.266821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.368 [2024-12-09 11:15:20.266836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.368 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f1dd4000b90, addr=10.0.0.2, port=4420, with timestamps 11:15:20.266928 through 11:15:20.276937 ...]
[... the triplet continues for tqpair=0x7f1dd4000b90 with timestamps 11:15:20.277012 through 11:15:20.277264 ...]
01:04:19.370 [2024-12-09 11:15:20.277351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.370 [2024-12-09 11:15:20.277372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.370 qpair failed and we were unable to recover it.
01:04:19.370 [2024-12-09 11:15:20.277458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.370 [2024-12-09 11:15:20.277477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.370 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f1dcc000b90 with timestamps 11:15:20.277560 through 11:15:20.278612 ...]
01:04:19.370 [2024-12-09 11:15:20.278555] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization...
01:04:19.371 [2024-12-09 11:15:20.278620] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the triplet repeats for tqpair=0x7f1dc8000b90 with timestamps 11:15:20.278698 through 11:15:20.279431 ...]
01:04:19.371 [2024-12-09 11:15:20.279506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.279521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.279598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.279612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.279750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.279763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.279893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.279908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.279986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.280087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.280624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.280976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.280991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.281170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.281706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.281858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.281873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.282283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 01:04:19.371 [2024-12-09 11:15:20.282858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.371 [2024-12-09 11:15:20.282873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.371 qpair failed and we were unable to recover it. 
01:04:19.371 [2024-12-09 11:15:20.282969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.282983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.283464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.283970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.283985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.284064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.284587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.284854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.284986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.285251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.285700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.285880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.285896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.286239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 
01:04:19.372 [2024-12-09 11:15:20.286711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.286973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.372 [2024-12-09 11:15:20.286987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.372 qpair failed and we were unable to recover it. 01:04:19.372 [2024-12-09 11:15:20.287060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 
01:04:19.373 [2024-12-09 11:15:20.287163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.287260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.287457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.287604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.287718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 
01:04:19.373 [2024-12-09 11:15:20.287818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.287968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.287982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.288116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.288131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.288207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.288222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 01:04:19.373 [2024-12-09 11:15:20.288294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.373 [2024-12-09 11:15:20.288308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.373 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.300816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.300832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.300919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.300936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.301396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.301805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.301899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.301913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.302605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.302971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.302985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.303166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.303677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.303976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.303991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.304122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.304137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 
01:04:19.376 [2024-12-09 11:15:20.304209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.376 [2024-12-09 11:15:20.304224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.376 qpair failed and we were unable to recover it. 01:04:19.376 [2024-12-09 11:15:20.304315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.304434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.304524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.304630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.304788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.304890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.304905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.305373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.305883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.305981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.305996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.306523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.306913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.306928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.307032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.307563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.307911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.307925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.308083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.308098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 
01:04:19.377 [2024-12-09 11:15:20.308195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.308210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.377 [2024-12-09 11:15:20.308305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.377 [2024-12-09 11:15:20.308320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.377 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.308408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.308517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.308684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 
01:04:19.378 [2024-12-09 11:15:20.308785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.308878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.308980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.308994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.309071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.309085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 01:04:19.378 [2024-12-09 11:15:20.309222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.378 [2024-12-09 11:15:20.309237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.378 qpair failed and we were unable to recover it. 
01:04:19.378 [2024-12-09 11:15:20.309330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.378 [2024-12-09 11:15:20.309345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.378 qpair failed and we were unable to recover it.
01:04:19.378 [... identical connect()/qpair-failure triplet repeated for tqpair=0x7f1dc8000b90 from 11:15:20.309419 through 11:15:20.313233 ...]
01:04:19.379 [2024-12-09 11:15:20.313233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.379 [2024-12-09 11:15:20.313247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.379 qpair failed and we were unable to recover it.
01:04:19.379 [2024-12-09 11:15:20.313408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.379 [2024-12-09 11:15:20.313427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.379 qpair failed and we were unable to recover it.
01:04:19.379 [... identical connect()/qpair-failure triplet repeated for tqpair=0x7f1dd4000b90 from 11:15:20.313524 through 11:15:20.322433 ...]
01:04:19.381 [2024-12-09 11:15:20.322433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.381 [2024-12-09 11:15:20.322449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.381 qpair failed and we were unable to recover it.
01:04:19.381 [2024-12-09 11:15:20.322597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.381 [2024-12-09 11:15:20.322614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.381 qpair failed and we were unable to recover it.
01:04:19.381 [2024-12-09 11:15:20.322708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.381 [2024-12-09 11:15:20.322723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.381 qpair failed and we were unable to recover it.
01:04:19.381 [2024-12-09 11:15:20.322788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.322803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.322947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.322961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 
01:04:19.381 [2024-12-09 11:15:20.323398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.323785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 
01:04:19.381 [2024-12-09 11:15:20.323893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.323907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 
01:04:19.381 [2024-12-09 11:15:20.324400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.324822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 
01:04:19.381 [2024-12-09 11:15:20.324976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.324991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.325064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.325081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.325164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.325179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.325264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.381 [2024-12-09 11:15:20.325278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.381 qpair failed and we were unable to recover it. 01:04:19.381 [2024-12-09 11:15:20.325420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.325435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.325655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.325670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.325744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.325759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.325828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.325843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.325978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.325993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.326162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.326811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.326912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.326927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.327334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.327849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.327865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.327996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.328433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.328916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.328931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 
01:04:19.382 [2024-12-09 11:15:20.329018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.329033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.329124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.382 [2024-12-09 11:15:20.329139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.382 qpair failed and we were unable to recover it. 01:04:19.382 [2024-12-09 11:15:20.329213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.329507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.329889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.329995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.330100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.330751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.330921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.330997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.331342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.331784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.331947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.331961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.332602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.332939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.332955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.333026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.333042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 
01:04:19.383 [2024-12-09 11:15:20.333188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.333203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.333280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.333295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.333384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.333400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.333473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.383 [2024-12-09 11:15:20.333488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.383 qpair failed and we were unable to recover it. 01:04:19.383 [2024-12-09 11:15:20.333560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.333575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.333651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.333666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.333744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.333760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.333849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.333866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.333968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.333985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.334157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.334758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.334914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.334929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.335350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.335796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.335894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.335909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.336441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.336931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.336945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.384 [2024-12-09 11:15:20.337080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.337096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.337173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.337188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.337261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.337276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.337365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.337381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 01:04:19.384 [2024-12-09 11:15:20.337472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.384 [2024-12-09 11:15:20.337487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.384 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.337556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.337571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.337654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.337669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.337753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.337767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.337845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.337861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.337938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.337953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.338023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.338504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.338896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.338911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.338990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.339522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.339923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.339938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.340136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.340661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.340918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.340934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.341018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.341034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.341114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.341129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 
01:04:19.385 [2024-12-09 11:15:20.341207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.385 [2024-12-09 11:15:20.341222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.385 qpair failed and we were unable to recover it. 01:04:19.385 [2024-12-09 11:15:20.341303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.341464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.341550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.341652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 
01:04:19.386 [2024-12-09 11:15:20.341799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.341886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.341984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.341999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 
01:04:19.386 [2024-12-09 11:15:20.342457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 01:04:19.386 [2024-12-09 11:15:20.342945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.386 [2024-12-09 11:15:20.342962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.386 qpair failed and we were unable to recover it. 
01:04:19.386 [2024-12-09 11:15:20.343044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.343936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.343951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.344947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.344963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.386 qpair failed and we were unable to recover it.
01:04:19.386 [2024-12-09 11:15:20.345784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.386 [2024-12-09 11:15:20.345799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.345878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.345892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.345969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.345984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.346953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.346968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.347521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.347535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.387 qpair failed and we were unable to recover it.
01:04:19.387 [2024-12-09 11:15:20.348974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.387 [2024-12-09 11:15:20.348990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.349910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.349987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.350969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.350984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.351982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.351996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.352974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.352989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.353082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.353097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.353171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.353185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.353330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.388 [2024-12-09 11:15:20.353346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.388 qpair failed and we were unable to recover it.
01:04:19.388 [2024-12-09 11:15:20.353445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.353460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.353538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.353553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.353687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.353703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.353785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.353803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.353887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.353901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 
01:04:19.388 [2024-12-09 11:15:20.353985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.354000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.354123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.354137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.354214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.354229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.354304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.354318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 01:04:19.388 [2024-12-09 11:15:20.354453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.388 [2024-12-09 11:15:20.354469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.388 qpair failed and we were unable to recover it. 
01:04:19.388 [2024-12-09 11:15:20.354550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.354564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.354656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.354671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.354806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.354821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.354892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.354907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.354981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.354996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.355070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.355543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.355907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.355921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.355991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.356593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.356955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.356970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.357141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.357661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.357928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.357944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.358130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.358740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.358914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.358928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.359258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 
01:04:19.389 [2024-12-09 11:15:20.359834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.359920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.389 [2024-12-09 11:15:20.359935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.389 qpair failed and we were unable to recover it. 01:04:19.389 [2024-12-09 11:15:20.360010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.390 [2024-12-09 11:15:20.360296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.390 [2024-12-09 11:15:20.360861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.360964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.360979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.390 [2024-12-09 11:15:20.361357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.390 [2024-12-09 11:15:20.361815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.361914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.361989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.390 [2024-12-09 11:15:20.362271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 01:04:19.390 [2024-12-09 11:15:20.362629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.390 [2024-12-09 11:15:20.362655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.390 qpair failed and we were unable to recover it. 
01:04:19.391 [2024-12-09 11:15:20.364857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.391 [2024-12-09 11:15:20.364877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.391 qpair failed and we were unable to recover it.
01:04:19.391 [2024-12-09 11:15:20.369165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.391 [2024-12-09 11:15:20.369184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.391 qpair failed and we were unable to recover it.
01:04:19.392 [2024-12-09 11:15:20.374826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.374841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.374917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.374932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 
01:04:19.392 [2024-12-09 11:15:20.375348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 
01:04:19.392 [2024-12-09 11:15:20.375896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.375910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.375999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.392 [2024-12-09 11:15:20.376014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.392 qpair failed and we were unable to recover it. 01:04:19.392 [2024-12-09 11:15:20.376091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.376363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.376866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.376881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.377017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.377515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.377947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.378055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.378532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.378976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.378991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.379075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.379518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.379923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.379939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.380112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.380767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.380970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.380986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 
01:04:19.393 [2024-12-09 11:15:20.381240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.393 [2024-12-09 11:15:20.381506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.393 qpair failed and we were unable to recover it. 01:04:19.393 [2024-12-09 11:15:20.381649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.381665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 
01:04:19.394 [2024-12-09 11:15:20.381738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.381753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.381839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.381854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.381922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.381936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 
01:04:19.394 [2024-12-09 11:15:20.382278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 01:04:19.394 [2024-12-09 11:15:20.382635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.394 [2024-12-09 11:15:20.382653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.394 qpair failed and we were unable to recover it. 
01:04:19.394 [2024-12-09 11:15:20.382729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.394 [2024-12-09 11:15:20.382745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.394 qpair failed and we were unable to recover it.
01:04:19.394 [2024-12-09 11:15:20.383709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... repeated "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." retry records (timestamps 11:15:20.382821 through 11:15:20.394752) omitted ...]
01:04:19.396 [2024-12-09 11:15:20.394832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.394849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.394936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.394952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 
01:04:19.396 [2024-12-09 11:15:20.395365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 
01:04:19.396 [2024-12-09 11:15:20.395845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.395934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.395949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 
01:04:19.396 [2024-12-09 11:15:20.396378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.396882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 
01:04:19.396 [2024-12-09 11:15:20.396974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.396988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.397060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.397075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.397172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.397187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.397262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.397276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.396 [2024-12-09 11:15:20.397377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.397392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 
01:04:19.396 [2024-12-09 11:15:20.397534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.396 [2024-12-09 11:15:20.397549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.396 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.397625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.397640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.397757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.397772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.397911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.397926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.398118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.398665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.398923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.398938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.399242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.399872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.399964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.399979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.400334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.400796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.400957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.400974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.401720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.401919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.401934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.402250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 
01:04:19.397 [2024-12-09 11:15:20.402879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.402971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.397 [2024-12-09 11:15:20.402986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.397 qpair failed and we were unable to recover it. 01:04:19.397 [2024-12-09 11:15:20.403068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.403375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.403911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.403925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.403993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.404428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.404950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.404965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.405051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.405603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.405911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.405925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.406096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.406958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.406973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.407229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.407831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.407927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.407942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 
01:04:19.398 [2024-12-09 11:15:20.408356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.398 [2024-12-09 11:15:20.408675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.398 qpair failed and we were unable to recover it. 01:04:19.398 [2024-12-09 11:15:20.408752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.408767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.408851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.408866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.408952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.408967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.409354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.409871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.409963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.409978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.410415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.410845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.410946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.410960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.411430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.411877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.411967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.411982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.412437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.412912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.412927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.413015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.413539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 01:04:19.399 [2024-12-09 11:15:20.413981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.399 [2024-12-09 11:15:20.413996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.399 qpair failed and we were unable to recover it. 
01:04:19.399 [2024-12-09 11:15:20.414136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.400 [2024-12-09 11:15:20.414151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.400 qpair failed and we were unable to recover it. 01:04:19.400 [2024-12-09 11:15:20.414232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.400 [2024-12-09 11:15:20.414247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.400 qpair failed and we were unable to recover it. 01:04:19.400 [2024-12-09 11:15:20.414320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.400 [2024-12-09 11:15:20.414334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.400 qpair failed and we were unable to recover it. 01:04:19.400 [2024-12-09 11:15:20.414408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.400 [2024-12-09 11:15:20.414423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.400 qpair failed and we were unable to recover it. 01:04:19.400 [2024-12-09 11:15:20.414504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.400 [2024-12-09 11:15:20.414520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.400 qpair failed and we were unable to recover it. 
01:04:19.400 [2024-12-09 11:15:20.414663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.414679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.414752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.414766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.414853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.414869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.414950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.414964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.415852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.415866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.416828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.416856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.417980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.417995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.418927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.418942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.400 qpair failed and we were unable to recover it.
01:04:19.400 [2024-12-09 11:15:20.419796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.400 [2024-12-09 11:15:20.419812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.419910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.419925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.419996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.420910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.420926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.421840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.421855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.422913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.422995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.423944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.423959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.424904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.424919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.401 [2024-12-09 11:15:20.425609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.401 qpair failed and we were unable to recover it.
01:04:19.401 [2024-12-09 11:15:20.425692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.425707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.425803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.425819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.425887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.425902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.425973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.425988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.426356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.426816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.426926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.426999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.427316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.427973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.427989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.428063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.428079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.428146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.428162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.428246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.428262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.428397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.428414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.428548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.428565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.428664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.428681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.428765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.428781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.428878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.428894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
01:04:19.402 [2024-12-09 11:15:20.429291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
01:04:19.402 [2024-12-09 11:15:20.429318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
01:04:19.402 [2024-12-09 11:15:20.429328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
01:04:19.402 [2024-12-09 11:15:20.429336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
01:04:19.402 [2024-12-09 11:15:20.429386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.429606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.429695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.429709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.429795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.429809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.429887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.429900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.430218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 01:04:19.402 [2024-12-09 11:15:20.430630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.430655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.402 qpair failed and we were unable to recover it. 
01:04:19.402 [2024-12-09 11:15:20.430838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.430853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.430779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
01:04:19.402 [2024-12-09 11:15:20.431002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.430882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
01:04:19.402 [2024-12-09 11:15:20.431018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.430909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
01:04:19.402 [2024-12-09 11:15:20.430910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
01:04:19.402 [2024-12-09 11:15:20.431116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.431130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.431212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.402 [2024-12-09 11:15:20.431226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.402 qpair failed and we were unable to recover it.
01:04:19.402 [2024-12-09 11:15:20.431377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.402 [2024-12-09 11:15:20.431391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.431480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.431495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.431572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.431587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.431683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.431699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.431781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.431795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.431867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.431882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.432438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.432813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.432828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.433160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.433827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.433938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.433953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.434491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.434933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.434948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.435080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.435194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.435300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.435385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.435602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.403 [2024-12-09 11:15:20.435767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.435866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.435881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.436016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.436134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.436149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 01:04:19.403 [2024-12-09 11:15:20.436238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.403 [2024-12-09 11:15:20.436253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.403 qpair failed and we were unable to recover it. 
01:04:19.405 [2024-12-09 11:15:20.447413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.405 [2024-12-09 11:15:20.447428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.405 qpair failed and we were unable to recover it.
01:04:19.405 [2024-12-09 11:15:20.447518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.405 [2024-12-09 11:15:20.447533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.405 qpair failed and we were unable to recover it.
01:04:19.405 [2024-12-09 11:15:20.447613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.405 [2024-12-09 11:15:20.447628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.405 qpair failed and we were unable to recover it.
01:04:19.405 [2024-12-09 11:15:20.447788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.405 [2024-12-09 11:15:20.447817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.405 qpair failed and we were unable to recover it.
01:04:19.405 [2024-12-09 11:15:20.447915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.405 [2024-12-09 11:15:20.447936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.405 qpair failed and we were unable to recover it.
01:04:19.405 [2024-12-09 11:15:20.448024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 
01:04:19.405 [2024-12-09 11:15:20.448561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.448969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.448984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 
01:04:19.405 [2024-12-09 11:15:20.449061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.449077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.405 [2024-12-09 11:15:20.449182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.405 [2024-12-09 11:15:20.449198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.405 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.449652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.449943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.449958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.450194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.450648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.450979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.450995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.451164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.451769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.451869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.451883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.452306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.452782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.452971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.452986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.453320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.453774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.453883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.453899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 
01:04:19.406 [2024-12-09 11:15:20.454498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.406 [2024-12-09 11:15:20.454810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.406 qpair failed and we were unable to recover it. 01:04:19.406 [2024-12-09 11:15:20.454945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.454960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
01:04:19.407 [2024-12-09 11:15:20.455037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
01:04:19.407 [2024-12-09 11:15:20.455657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.455860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.455875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
01:04:19.407 [2024-12-09 11:15:20.456292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
01:04:19.407 [2024-12-09 11:15:20.456795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.456910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.456984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
01:04:19.407 [2024-12-09 11:15:20.457356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 01:04:19.407 [2024-12-09 11:15:20.457837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.407 [2024-12-09 11:15:20.457853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.407 qpair failed and we were unable to recover it. 
[01:04:19.407-01:04:19.409: the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence repeats continuously for addr=10.0.0.2, port=4420, cycling through tqpair=0x7f1dcc000b90, 0x7f1dd4000b90, 0x7f1dc8000b90, and 0x5f84d0]
01:04:19.409 [2024-12-09 11:15:20.470160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 
01:04:19.409 [2024-12-09 11:15:20.470653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.470912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.470927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 
01:04:19.409 [2024-12-09 11:15:20.471333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 01:04:19.409 [2024-12-09 11:15:20.471780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.409 [2024-12-09 11:15:20.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.409 qpair failed and we were unable to recover it. 
01:04:19.409 [2024-12-09 11:15:20.471866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.471881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.471956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.471971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.472448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.472838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.472926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.472941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.473449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.473927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.473942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.474017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.474538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.474971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.474985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.475164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.475836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.475851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.475997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.476453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.476963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.476980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 
01:04:19.410 [2024-12-09 11:15:20.477065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.477080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.477153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.477169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.477247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.477263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.410 qpair failed and we were unable to recover it. 01:04:19.410 [2024-12-09 11:15:20.477353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.410 [2024-12-09 11:15:20.477369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.477523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.477539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 
01:04:19.411 [2024-12-09 11:15:20.477632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.477652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.477789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.477805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.477886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.477901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.477998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 
01:04:19.411 [2024-12-09 11:15:20.478185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 
01:04:19.411 [2024-12-09 11:15:20.478693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.478903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.478918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.479063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.479079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.411 [2024-12-09 11:15:20.479168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.479184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 
01:04:19.411 [2024-12-09 11:15:20.481661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.411 [2024-12-09 11:15:20.481685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.411 qpair failed and we were unable to recover it. 01:04:19.412 [identical connect() failures (errno = 111) for tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 repeated through 2024-12-09 11:15:20.491218; the qpair could not be recovered]
01:04:19.709 [2024-12-09 11:15:20.491368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 01:04:19.709 [2024-12-09 11:15:20.491513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 01:04:19.709 [2024-12-09 11:15:20.491597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 01:04:19.709 [2024-12-09 11:15:20.491689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 01:04:19.709 [2024-12-09 11:15:20.491784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 
01:04:19.709 [2024-12-09 11:15:20.491889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.709 [2024-12-09 11:15:20.491903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.709 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.491976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.491990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.492377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.492881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.492975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.492990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.493538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.493959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.493980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.494120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.494712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.494923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.494937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.495267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.495764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.710 [2024-12-09 11:15:20.495868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.495882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.496020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.496035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.496184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.496199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.496276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.496291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 01:04:19.710 [2024-12-09 11:15:20.496368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.710 [2024-12-09 11:15:20.496382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.710 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.496521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.496535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.496610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.496624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.496711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.496726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.496802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.496816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.496892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.496907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.497040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.497720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.497930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.498811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.498919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.498934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.499558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.499922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.499937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.711 [2024-12-09 11:15:20.500056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.500070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.500142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.500156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.500244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.500258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.500348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.500363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 01:04:19.711 [2024-12-09 11:15:20.500432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.711 [2024-12-09 11:15:20.500447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.711 qpair failed and we were unable to recover it. 
01:04:19.713 [2024-12-09 11:15:20.506235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.713 [2024-12-09 11:15:20.506249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.713 qpair failed and we were unable to recover it. 01:04:19.713 [2024-12-09 11:15:20.506399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.713 [2024-12-09 11:15:20.506423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.713 qpair failed and we were unable to recover it. 01:04:19.713 [2024-12-09 11:15:20.506503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.713 [2024-12-09 11:15:20.506519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.713 qpair failed and we were unable to recover it. 01:04:19.713 [2024-12-09 11:15:20.506605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.713 [2024-12-09 11:15:20.506619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.713 qpair failed and we were unable to recover it. 01:04:19.713 [2024-12-09 11:15:20.506773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.713 [2024-12-09 11:15:20.506788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.713 qpair failed and we were unable to recover it. 
01:04:19.714 [2024-12-09 11:15:20.513059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.714 [2024-12-09 11:15:20.513073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.714 qpair failed and we were unable to recover it. 01:04:19.714 [2024-12-09 11:15:20.513210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.714 [2024-12-09 11:15:20.513225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.714 qpair failed and we were unable to recover it. 01:04:19.714 [2024-12-09 11:15:20.513310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.714 [2024-12-09 11:15:20.513324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.714 qpair failed and we were unable to recover it. 01:04:19.714 [2024-12-09 11:15:20.513414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.714 [2024-12-09 11:15:20.513429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.714 qpair failed and we were unable to recover it. 01:04:19.714 [2024-12-09 11:15:20.513563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.714 [2024-12-09 11:15:20.513578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.714 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.513661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.513676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.513765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.513780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.513920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.513934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.514217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.514807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.514970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.514985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.515523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.515922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.516026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.516549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.516952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.516969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.517109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 
01:04:19.715 [2024-12-09 11:15:20.517649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.517930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.517944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.518014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.715 [2024-12-09 11:15:20.518028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.715 qpair failed and we were unable to recover it. 01:04:19.715 [2024-12-09 11:15:20.518165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.518350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.518439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.518597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.518748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.518852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.518946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.518960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.519536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.519885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.519900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.520054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.520164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.520343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.520511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.520630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.520790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.520889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.520904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.521457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.521857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.521874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.522007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 
01:04:19.716 [2024-12-09 11:15:20.522534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.716 [2024-12-09 11:15:20.522790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.716 qpair failed and we were unable to recover it. 01:04:19.716 [2024-12-09 11:15:20.522861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.522875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.522956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.522971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.523042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.523568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.523903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.523917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.524142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.524748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.524970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.524989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.525439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.525915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.525931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.526069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.526176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.526323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.526416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.526532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 
01:04:19.717 [2024-12-09 11:15:20.526635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.717 [2024-12-09 11:15:20.526792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.717 [2024-12-09 11:15:20.526807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.717 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.526876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.526891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.526978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.526993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.527859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.527954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.527969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.528348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.528908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.528924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.529125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.529215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.529376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.529465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.529630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.529729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.529823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.529838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.530037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:19.718 [2024-12-09 11:15:20.530133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.530227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.530393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 01:04:19.718 [2024-12-09 11:15:20.530496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.530605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.530701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.530789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:19.718 [2024-12-09 11:15:20.530894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.530909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.531003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.531019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 [2024-12-09 11:15:20.531091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.531106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 01:04:19.718 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:19.718 [2024-12-09 11:15:20.531179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.718 [2024-12-09 11:15:20.531202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.718 qpair failed and we were unable to recover it. 
01:04:19.718 [2024-12-09 11:15:20.531282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:19.719 [2024-12-09 11:15:20.531483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 
01:04:19.719 [2024-12-09 11:15:20.531747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.531946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.531960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 
01:04:19.719 [2024-12-09 11:15:20.532233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 
01:04:19.719 [2024-12-09 11:15:20.532737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.532931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.532947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.533026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.533042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 01:04:19.719 [2024-12-09 11:15:20.533125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.719 [2024-12-09 11:15:20.533140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.719 qpair failed and we were unable to recover it. 
01:04:19.720 [2024-12-09 11:15:20.538105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.720 [2024-12-09 11:15:20.538137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.720 qpair failed and we were unable to recover it.
01:04:19.720 [2024-12-09 11:15:20.538238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.720 [2024-12-09 11:15:20.538254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.720 qpair failed and we were unable to recover it.
01:04:19.720 [2024-12-09 11:15:20.538406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.720 [2024-12-09 11:15:20.538423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.720 qpair failed and we were unable to recover it.
01:04:19.720 [2024-12-09 11:15:20.538504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.720 [2024-12-09 11:15:20.538520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.720 qpair failed and we were unable to recover it.
01:04:19.720 [2024-12-09 11:15:20.538610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.720 [2024-12-09 11:15:20.538624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.720 qpair failed and we were unable to recover it.
01:04:19.722 [2024-12-09 11:15:20.545423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.545506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.545597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.545718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.545819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 
01:04:19.722 [2024-12-09 11:15:20.545976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.545992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 
01:04:19.722 [2024-12-09 11:15:20.546536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.546942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.546957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 
01:04:19.722 [2024-12-09 11:15:20.547048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 
01:04:19.722 [2024-12-09 11:15:20.547527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.722 [2024-12-09 11:15:20.547629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.722 qpair failed and we were unable to recover it. 01:04:19.722 [2024-12-09 11:15:20.547711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.547728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.547880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.547894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.547963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.547978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.548059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.548525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.548981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.548997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.549075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.549612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.549984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.549999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.550080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.550666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.550935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.550950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.551159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 
01:04:19.723 [2024-12-09 11:15:20.551825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.723 [2024-12-09 11:15:20.551841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.723 qpair failed and we were unable to recover it. 01:04:19.723 [2024-12-09 11:15:20.551937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.551952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 
01:04:19.724 [2024-12-09 11:15:20.552433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 
01:04:19.724 [2024-12-09 11:15:20.552903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.552918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.552988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.553002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.553072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.553087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.553169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.553184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 01:04:19.724 [2024-12-09 11:15:20.553262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.724 [2024-12-09 11:15:20.553276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.724 qpair failed and we were unable to recover it. 
01:04:19.724 [2024-12-09 11:15:20.553354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.724 [2024-12-09 11:15:20.553369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.724 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:15:20.553453 through 11:15:20.566059, first for tqpair=0x7f1dcc000b90, then for tqpair=0x7f1dc8000b90, then again for tqpair=0x7f1dcc000b90 ...]
01:04:19.727 [2024-12-09 11:15:20.566144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.566241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.566338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.566422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.566505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 
01:04:19.727 [2024-12-09 11:15:20.566604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.566619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.566974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 
01:04:19.727 [2024-12-09 11:15:20.567615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.567971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.567986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 
01:04:19.727 [2024-12-09 11:15:20.568159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 
01:04:19.727 [2024-12-09 11:15:20.568631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.727 [2024-12-09 11:15:20.568903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.727 [2024-12-09 11:15:20.568918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.727 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.569316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.569723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.569813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.569827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.570431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.570902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.570917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.570994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.571415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.571806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.571894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.571909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.572543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.572943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.572958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 
01:04:19.728 [2024-12-09 11:15:20.573030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.573045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.728 [2024-12-09 11:15:20.573120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.728 [2024-12-09 11:15:20.573134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.728 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 
01:04:19.729 [2024-12-09 11:15:20.573565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.573912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.573928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.574004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 
01:04:19.729 [2024-12-09 11:15:20.574094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.574210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.574298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.574388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 01:04:19.729 [2024-12-09 11:15:20.574474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.729 [2024-12-09 11:15:20.574491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.729 qpair failed and we were unable to recover it. 
01:04:19.729 [2024-12-09 11:15:20.574571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.729 [2024-12-09 11:15:20.574586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.729 qpair failed and we were unable to recover it.
01:04:19.729 [... the same connect() failure and unrecoverable qpair error repeated 16 more times for tqpair=0x7f1dc8000b90 between 11:15:20.574662 and 11:15:20.576161 ...]
01:04:19.729 [2024-12-09 11:15:20.576262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.729 [2024-12-09 11:15:20.576286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.729 qpair failed and we were unable to recover it.
01:04:19.729 [2024-12-09 11:15:20.576375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.729 [2024-12-09 11:15:20.576392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.729 qpair failed and we were unable to recover it.
01:04:19.729 [2024-12-09 11:15:20.576484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.729 [2024-12-09 11:15:20.576500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.729 qpair failed and we were unable to recover it.
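Editor's note: the repeated `errno = 111` in the connect() failures above is Linux's ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 at the time of each attempt (expected here, since the test deliberately disconnects the target). A minimal Python check of the errno mapping, assuming a Linux host as used by this CI:

```python
import errno

# On Linux, errno 111 maps to ECONNREFUSED: the TCP connection attempt
# was rejected because no listener was bound to the target address/port.
print(errno.errorcode[111])        # ECONNREFUSED (on Linux)
print(errno.ECONNREFUSED == 111)   # True on Linux
```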
01:04:19.729 [2024-12-09 11:15:20.576574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.729 [2024-12-09 11:15:20.576588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.729 qpair failed and we were unable to recover it.
01:04:19.730 [... the same connect() failure and unrecoverable qpair error repeated 23 more times for tqpair=0x7f1dcc000b90 between 11:15:20.576671 and 11:15:20.578895 ...]
01:04:19.730 [2024-12-09 11:15:20.578997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.730 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
01:04:19.730 [2024-12-09 11:15:20.579012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.730 qpair failed and we were unable to recover it.
01:04:19.730 [... the same connect() failure and unrecoverable qpair error repeated 5 more times for tqpair=0x7f1dcc000b90 between 11:15:20.579094 and 11:15:20.579478 ...]
01:04:19.730 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
01:04:19.730 [2024-12-09 11:15:20.579549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.730 [2024-12-09 11:15:20.579565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.730 qpair failed and we were unable to recover it.
01:04:19.730 [... the same connect() failure and unrecoverable qpair error repeated 4 more times for tqpair=0x7f1dcc000b90 between 11:15:20.579712 and 11:15:20.580031 ...]
01:04:19.730 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:19.730 [... the same connect() failure and unrecoverable qpair error repeated 3 more times for tqpair=0x7f1dcc000b90 between 11:15:20.580108 and 11:15:20.580379 ...]
01:04:19.730 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.730 [2024-12-09 11:15:20.580465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.730 [2024-12-09 11:15:20.580481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dd4000b90 with addr=10.0.0.2, port=4420
01:04:19.730 qpair failed and we were unable to recover it.
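Editor's note: the traced RPC `bdev_malloc_create 64 512 -b Malloc0` creates a RAM-backed bdev on the target. Assuming SPDK's documented parameter order (total size in MiB first, block size in bytes second), the resulting Malloc0 would expose 131072 blocks; a small arithmetic sketch:

```python
# Hypothetical check of the bdev geometry implied by the traced RPC,
# assuming the arguments are (total_size_mib, block_size_bytes).
total_size_mib = 64   # first positional argument to bdev_malloc_create
block_size = 512      # second positional argument, in bytes

num_blocks = total_size_mib * 1024 * 1024 // block_size
print(num_blocks)  # 131072
```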
01:04:19.730 [2024-12-09 11:15:20.580557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.730 [2024-12-09 11:15:20.580573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.730 qpair failed and we were unable to recover it.
01:04:19.732 [... the same connect() failure and unrecoverable qpair error repeated 54 more times for tqpair=0x7f1dc8000b90 between 11:15:20.580664 and 11:15:20.585951 ...]
01:04:19.732 [2024-12-09 11:15:20.586091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 
01:04:19.732 [2024-12-09 11:15:20.586562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.586936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.586951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 
01:04:19.732 [2024-12-09 11:15:20.587018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 
01:04:19.732 [2024-12-09 11:15:20.587580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.587973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.587988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 
01:04:19.732 [2024-12-09 11:15:20.588063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.588078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.588150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.588167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.588307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.588322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.732 qpair failed and we were unable to recover it. 01:04:19.732 [2024-12-09 11:15:20.588409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.732 [2024-12-09 11:15:20.588424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.588502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.588598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.588682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.588775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.588860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.588953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.588968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.589049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.589557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.589985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.589999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.590068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.590535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.590971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.590986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.591055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.591543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.591949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.591965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 
01:04:19.733 [2024-12-09 11:15:20.592051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.592067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.592147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.592166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.733 [2024-12-09 11:15:20.592245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.733 [2024-12-09 11:15:20.592261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.733 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.592550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.592957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.592972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.593057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.593536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.593915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.593994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.594083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.594592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.594896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.594991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.595094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.595600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.595918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.595932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.596070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.596085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 
01:04:19.734 [2024-12-09 11:15:20.596228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.734 [2024-12-09 11:15:20.596243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.734 qpair failed and we were unable to recover it. 01:04:19.734 [2024-12-09 11:15:20.596330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.596497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.596642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.596742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.596839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.596938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.596953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.597323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.597939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.597954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.598473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.598854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.598946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.598961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.599619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.599950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.599966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.735 [2024-12-09 11:15:20.600112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.600128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 
01:04:19.735 [2024-12-09 11:15:20.600204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.735 [2024-12-09 11:15:20.600220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.735 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.600760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.600972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.600989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.601284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.601747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.601917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.601934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.602346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.602805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.602928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.602943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.603480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.603927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.603941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.604078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.604093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.604232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.604246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.604323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.604338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.604413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.604428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 01:04:19.736 [2024-12-09 11:15:20.604494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.736 [2024-12-09 11:15:20.604509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.736 qpair failed and we were unable to recover it. 
01:04:19.736 [2024-12-09 11:15:20.604640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.737 [2024-12-09 11:15:20.604659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.737 qpair failed and we were unable to recover it. 01:04:19.737 [2024-12-09 11:15:20.604809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.737 [2024-12-09 11:15:20.604824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.737 qpair failed and we were unable to recover it. 01:04:19.737 [2024-12-09 11:15:20.604896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.737 [2024-12-09 11:15:20.604910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.737 qpair failed and we were unable to recover it. 01:04:19.737 [2024-12-09 11:15:20.604990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.737 [2024-12-09 11:15:20.605005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.737 qpair failed and we were unable to recover it. 01:04:19.737 [2024-12-09 11:15:20.605081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.737 [2024-12-09 11:15:20.605096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.737 qpair failed and we were unable to recover it. 
01:04:19.737 [2024-12-09 11:15:20.605176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.605854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.605869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.606961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.606976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.607919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.607995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f84d0 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.737 [2024-12-09 11:15:20.608607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.737 [2024-12-09 11:15:20.608622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.737 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.608699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.608714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.608787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.608802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.608868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.608883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.608957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.608972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.609949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.609963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.610971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.610986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.611920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.611991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.738 [2024-12-09 11:15:20.612709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.738 [2024-12-09 11:15:20.612724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.738 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.612832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.612848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.612931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.612947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.613947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.613962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.614911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.614925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.615005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.615020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.615156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.739 [2024-12-09 11:15:20.615171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.739 qpair failed and we were unable to recover it.
01:04:19.739 [2024-12-09 11:15:20.615256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.615349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.615449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.615537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.615680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 
01:04:19.739 [2024-12-09 11:15:20.615771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.615918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.615933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.616003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.616095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.616200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 
01:04:19.739 [2024-12-09 11:15:20.616305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 Malloc0 01:04:19.739 [2024-12-09 11:15:20.616406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.616495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.739 [2024-12-09 11:15:20.616509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.739 qpair failed and we were unable to recover it. 01:04:19.739 [2024-12-09 11:15:20.616608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.616794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.616809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.616923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.616938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:19.740 [2024-12-09 11:15:20.617320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.617472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:04:19.740 [2024-12-09 11:15:20.617649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.617880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.617895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.617971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:19.740 [2024-12-09 11:15:20.617987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dcc000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.618084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.618251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:19.740 [2024-12-09 11:15:20.618332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.618482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.618598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.618765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.618922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.618937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.619093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.619223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.619390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.619550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.619711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.619816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.619917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.619932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.620455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.620955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.620970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 
01:04:19.740 [2024-12-09 11:15:20.621067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.621082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.740 [2024-12-09 11:15:20.621175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.740 [2024-12-09 11:15:20.621191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.740 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.621654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.621854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.621992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.622177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.622734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.622838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.622854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.623362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.623881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.623896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 [2024-12-09 11:15:20.623889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.623996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.624457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.624898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.624913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.741 [2024-12-09 11:15:20.625049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.625064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.625149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.625164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.625304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.625318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.625418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.625433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 01:04:19.741 [2024-12-09 11:15:20.625509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.741 [2024-12-09 11:15:20.625525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.741 qpair failed and we were unable to recover it. 
01:04:19.743 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:19.743 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
01:04:19.743 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:19.743 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.743 [connect() failed, errno = 111 / sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it: retry sequence continues, interleaved with the shell trace above]
01:04:19.744 [2024-12-09 11:15:20.638108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.744 [2024-12-09 11:15:20.638123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.744 qpair failed and we were unable to recover it. 01:04:19.744 [2024-12-09 11:15:20.638206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.744 [2024-12-09 11:15:20.638221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.744 qpair failed and we were unable to recover it. 01:04:19.744 [2024-12-09 11:15:20.638308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.744 [2024-12-09 11:15:20.638324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.744 qpair failed and we were unable to recover it. 01:04:19.744 [2024-12-09 11:15:20.638394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.638409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.638484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.638499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 
01:04:19.745 [2024-12-09 11:15:20.638632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.638649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.638785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.638800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.638934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.638948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 
01:04:19.745 [2024-12-09 11:15:20.639228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 
01:04:19.745 [2024-12-09 11:15:20.639790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.639901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.639989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.640004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.640142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.640157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.640231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.640245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 
01:04:19.745 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:19.745 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
01:04:19.745 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:19.745 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.745 [2024-12-09 11:15:20.641864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.641879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.641965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.641981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.642058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.642073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.642208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.642224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.642362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.642376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 
01:04:19.745 [2024-12-09 11:15:20.642449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.745 [2024-12-09 11:15:20.642464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.745 qpair failed and we were unable to recover it. 01:04:19.745 [2024-12-09 11:15:20.642606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.642621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.642709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.642724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.642800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.642815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.642947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.642962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.643036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.643576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.643858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.643873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.644212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.644660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.644903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.644989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.645253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.645788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.645981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.645995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.646348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 01:04:19.746 [2024-12-09 11:15:20.646791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.746 qpair failed and we were unable to recover it. 
01:04:19.746 [2024-12-09 11:15:20.646879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.746 [2024-12-09 11:15:20.646894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.747 qpair failed and we were unable to recover it. 01:04:19.747 [2024-12-09 11:15:20.647040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.747 [2024-12-09 11:15:20.647055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.747 qpair failed and we were unable to recover it. 01:04:19.747 [2024-12-09 11:15:20.647135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.747 [2024-12-09 11:15:20.647150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.747 qpair failed and we were unable to recover it. 01:04:19.747 [2024-12-09 11:15:20.647237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.747 [2024-12-09 11:15:20.647252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.747 qpair failed and we were unable to recover it. 01:04:19.747 [2024-12-09 11:15:20.647329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:19.747 [2024-12-09 11:15:20.647343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420 01:04:19.747 qpair failed and we were unable to recover it. 
01:04:19.747 [2024-12-09 11:15:20.647477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.647566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.647667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.647765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.647845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.647947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.647962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:19.747 [2024-12-09 11:15:20.648806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.648923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.648996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.649084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:04:19.747 [2024-12-09 11:15:20.649171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.649261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.649349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.649451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:19.747 [2024-12-09 11:15:20.649603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.649720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.747 [2024-12-09 11:15:20.649826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.649842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.747 [2024-12-09 11:15:20.650830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.747 qpair failed and we were unable to recover it.
01:04:19.747 [2024-12-09 11:15:20.650922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.650937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.651938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:04:19.748 [2024-12-09 11:15:20.651953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dc8000b90 with addr=10.0.0.2, port=4420
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.652132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:04:19.748 [2024-12-09 11:15:20.654667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.654754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.654778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.654790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.654800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.654834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:04:19.748 [2024-12-09 11:15:20.664592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.664664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.664683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.664694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.664704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.664725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:19.748 11:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2558019
01:04:19.748 [2024-12-09 11:15:20.674603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.674676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.674694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.674705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.674714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.674734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.684533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.684642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.684663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.684673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.684683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.684706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.694541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.694642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.694664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.694675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.694685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.694705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.704530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.704598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.704616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.704626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.704636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.704660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.714557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.714632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.714655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.748 [2024-12-09 11:15:20.714666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.748 [2024-12-09 11:15:20.714676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.748 [2024-12-09 11:15:20.714695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.748 qpair failed and we were unable to recover it.
01:04:19.748 [2024-12-09 11:15:20.724595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.748 [2024-12-09 11:15:20.724664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.748 [2024-12-09 11:15:20.724683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.724694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.724703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.724723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.734601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.734683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.734700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.734711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.734721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.734740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.744610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.744677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.744695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.744705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.744715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.744735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.754660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.754728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.754747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.754758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.754768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.754788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.764668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.764736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.764754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.764764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.764774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.764793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.774726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.774796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.774813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.774827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.774837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.774857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.784745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.784816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.784833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.784843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.784853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.784872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.794823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:19.749 [2024-12-09 11:15:20.794897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:19.749 [2024-12-09 11:15:20.794914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:19.749 [2024-12-09 11:15:20.794925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:19.749 [2024-12-09 11:15:20.794934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:19.749 [2024-12-09 11:15:20.794954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:19.749 qpair failed and we were unable to recover it.
01:04:19.749 [2024-12-09 11:15:20.804776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.804838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.804854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.804865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.804874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.749 [2024-12-09 11:15:20.804893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.749 qpair failed and we were unable to recover it. 
01:04:19.749 [2024-12-09 11:15:20.814775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.814844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.814862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.814872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.814882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.749 [2024-12-09 11:15:20.814904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.749 qpair failed and we were unable to recover it. 
01:04:19.749 [2024-12-09 11:15:20.824865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.824958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.824977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.824987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.824997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.749 [2024-12-09 11:15:20.825017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.749 qpair failed and we were unable to recover it. 
01:04:19.749 [2024-12-09 11:15:20.834829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.834902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.834919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.834929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.834939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.749 [2024-12-09 11:15:20.834958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.749 qpair failed and we were unable to recover it. 
01:04:19.749 [2024-12-09 11:15:20.844868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.844931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.844948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.844958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.844968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.749 [2024-12-09 11:15:20.844988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.749 qpair failed and we were unable to recover it. 
01:04:19.749 [2024-12-09 11:15:20.854947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:19.749 [2024-12-09 11:15:20.855012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:19.749 [2024-12-09 11:15:20.855028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:19.749 [2024-12-09 11:15:20.855039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:19.749 [2024-12-09 11:15:20.855050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:19.750 [2024-12-09 11:15:20.855069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:19.750 qpair failed and we were unable to recover it. 
01:04:20.011 [2024-12-09 11:15:20.864988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.011 [2024-12-09 11:15:20.865062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.011 [2024-12-09 11:15:20.865078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.011 [2024-12-09 11:15:20.865089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.011 [2024-12-09 11:15:20.865100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.011 [2024-12-09 11:15:20.865120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.011 qpair failed and we were unable to recover it. 
01:04:20.011 [2024-12-09 11:15:20.874995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.011 [2024-12-09 11:15:20.875061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.011 [2024-12-09 11:15:20.875077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.011 [2024-12-09 11:15:20.875088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.011 [2024-12-09 11:15:20.875097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.011 [2024-12-09 11:15:20.875116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.011 qpair failed and we were unable to recover it. 
01:04:20.011 [2024-12-09 11:15:20.884993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.011 [2024-12-09 11:15:20.885056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.011 [2024-12-09 11:15:20.885073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.011 [2024-12-09 11:15:20.885084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.011 [2024-12-09 11:15:20.885094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.011 [2024-12-09 11:15:20.885113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.011 qpair failed and we were unable to recover it. 
01:04:20.011 [2024-12-09 11:15:20.895035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.011 [2024-12-09 11:15:20.895101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.011 [2024-12-09 11:15:20.895119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.895129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.895139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.895159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.905086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.905154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.905178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.905189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.905199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.905218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.915110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.915192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.915209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.915220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.915231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.915250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.925118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.925187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.925204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.925215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.925224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.925244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.935157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.935223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.935242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.935253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.935264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.935284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.945226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.945292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.945309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.945320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.945334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.945354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.955238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.955306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.955323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.955334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.955344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.955364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.965178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.965238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.965255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.965266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.965276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.965296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.975292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.975360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.975377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.975388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.975397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.975417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.985331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.985412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.985429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.985440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.985449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.985469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:20.995365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:20.995439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:20.995457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:20.995468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:20.995478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:20.995498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:21.005283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:21.005355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:21.005374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:21.005385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:21.005396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:21.005415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:21.015367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:21.015439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:21.015458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:21.015470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:21.015481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.012 [2024-12-09 11:15:21.015501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.012 qpair failed and we were unable to recover it. 
01:04:20.012 [2024-12-09 11:15:21.025372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.012 [2024-12-09 11:15:21.025440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.012 [2024-12-09 11:15:21.025458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.012 [2024-12-09 11:15:21.025469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.012 [2024-12-09 11:15:21.025479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.025498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.035436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.035503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.035524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.035536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.035546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.035565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.045418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.045494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.045511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.045522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.045532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.045551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.055529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.055596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.055620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.055631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.055641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.055666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.065547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.065613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.065630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.065640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.065657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.065677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.075513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.075578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.075596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.075607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.075620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.075641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.085531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.085595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.085613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.085624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.085634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.085659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
01:04:20.013 [2024-12-09 11:15:21.095661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.013 [2024-12-09 11:15:21.095775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.013 [2024-12-09 11:15:21.095794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.013 [2024-12-09 11:15:21.095805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.013 [2024-12-09 11:15:21.095816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.013 [2024-12-09 11:15:21.095837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.013 qpair failed and we were unable to recover it. 
[... the identical CONNECT failure sequence (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / CQ transport error -6 on qpair id 4 / "qpair failed and we were unable to recover it.") repeats 34 more times at ~10 ms intervals, timestamps 2024-12-09 11:15:21.105 through 11:15:21.436 ...]
01:04:20.277 [2024-12-09 11:15:21.446769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.277 [2024-12-09 11:15:21.446850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.277 [2024-12-09 11:15:21.446866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.277 [2024-12-09 11:15:21.446876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.277 [2024-12-09 11:15:21.446886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.277 [2024-12-09 11:15:21.446906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.277 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.456789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.456886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.456902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.456914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.456924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.456946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.466834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.466930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.466950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.466961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.466970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.466990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.476815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.476890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.476906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.476916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.476926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.476946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.486730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.486793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.486809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.486821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.486831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.486850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.496835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.496933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.496950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.496961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.496970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.496991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.506758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.506835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.506852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.506863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.506875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.506894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.516874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.516934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.516951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.516962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.516973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.516993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.526837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.526902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.526918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.526929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.526939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.538 [2024-12-09 11:15:21.526959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.538 qpair failed and we were unable to recover it. 
01:04:20.538 [2024-12-09 11:15:21.536989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.538 [2024-12-09 11:15:21.537082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.538 [2024-12-09 11:15:21.537099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.538 [2024-12-09 11:15:21.537110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.538 [2024-12-09 11:15:21.537120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.537141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.546875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.546946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.546963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.546974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.546983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.547003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.556949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.557022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.557041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.557051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.557061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.557082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.566947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.567014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.567031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.567042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.567052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.567071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.577037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.577107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.577124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.577135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.577145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.577165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.587015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.587085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.587102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.587112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.587122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.587141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.597067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.597130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.597150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.597160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.597170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.597189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.607059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.607162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.607181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.607192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.607202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.607222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.617080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.617146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.617163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.617174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.617184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.617204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.627168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.627235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.627252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.627263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.627273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.627293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.637184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.637297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.637313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.637324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.637337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.637357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.647171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.647239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.647256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.647266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.647276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.647295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.657280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.657341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.657357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.657368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.657378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.657398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.667262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.667328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.539 [2024-12-09 11:15:21.667346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.539 [2024-12-09 11:15:21.667358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.539 [2024-12-09 11:15:21.667368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.539 [2024-12-09 11:15:21.667388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.539 qpair failed and we were unable to recover it. 
01:04:20.539 [2024-12-09 11:15:21.677291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.539 [2024-12-09 11:15:21.677370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.540 [2024-12-09 11:15:21.677388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.540 [2024-12-09 11:15:21.677400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.540 [2024-12-09 11:15:21.677410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.540 [2024-12-09 11:15:21.677430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.540 qpair failed and we were unable to recover it. 
01:04:20.540 [2024-12-09 11:15:21.687278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.540 [2024-12-09 11:15:21.687343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.540 [2024-12-09 11:15:21.687360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.540 [2024-12-09 11:15:21.687370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.540 [2024-12-09 11:15:21.687380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.540 [2024-12-09 11:15:21.687401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.540 qpair failed and we were unable to recover it. 
01:04:20.540 [2024-12-09 11:15:21.697344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.540 [2024-12-09 11:15:21.697419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.540 [2024-12-09 11:15:21.697438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.540 [2024-12-09 11:15:21.697450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.540 [2024-12-09 11:15:21.697460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.540 [2024-12-09 11:15:21.697481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.540 qpair failed and we were unable to recover it. 
01:04:20.540 [2024-12-09 11:15:21.707380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:20.540 [2024-12-09 11:15:21.707445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:20.540 [2024-12-09 11:15:21.707461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:20.540 [2024-12-09 11:15:21.707472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:20.540 [2024-12-09 11:15:21.707482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:20.540 [2024-12-09 11:15:21.707503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:20.540 qpair failed and we were unable to recover it. 
01:04:20.801 [2024-12-09 11:15:21.717444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.717508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.717525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.717535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.717545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.717566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.727402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.727474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.727495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.727505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.727515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.727535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.737467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.737532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.737548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.737560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.737570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.737589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.747484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.747570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.747587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.747598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.747608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.747628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.757519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.757582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.757599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.757610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.757620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.757640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.767529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.767593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.767611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.767629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.767640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.767664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.801 [2024-12-09 11:15:21.777579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.801 [2024-12-09 11:15:21.777658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.801 [2024-12-09 11:15:21.777675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.801 [2024-12-09 11:15:21.777686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.801 [2024-12-09 11:15:21.777696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.801 [2024-12-09 11:15:21.777715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.801 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.787649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.787713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.787730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.787741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.787751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.787770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.797622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.797698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.797716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.797726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.797736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.797755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.807620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.807687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.807705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.807715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.807725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.807747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.817711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.817783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.817800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.817810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.817820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.817839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.827718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.827794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.827810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.827821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.827831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.827850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.837784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.837879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.837895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.837906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.837916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.837935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.847738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.847820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.847836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.847847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.847856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.847877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.857808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.857882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.857898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.857909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.857919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.857938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.867849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.867957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.867974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.867984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.867995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.868015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.877868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.877945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.877961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.877972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.877982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.878001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.887863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.887937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.887954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.887964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.887974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.887994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.897968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.898067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.898084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.898098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.898108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.898127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.907966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.908065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.802 [2024-12-09 11:15:21.908082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.802 [2024-12-09 11:15:21.908092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.802 [2024-12-09 11:15:21.908102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.802 [2024-12-09 11:15:21.908122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.802 qpair failed and we were unable to recover it.
01:04:20.802 [2024-12-09 11:15:21.917904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.802 [2024-12-09 11:15:21.917971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.917987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.917998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.918008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.918028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:20.803 [2024-12-09 11:15:21.927973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.803 [2024-12-09 11:15:21.928050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.928068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.928079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.928089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.928109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:20.803 [2024-12-09 11:15:21.937957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.803 [2024-12-09 11:15:21.938066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.938083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.938093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.938104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.938127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:20.803 [2024-12-09 11:15:21.948041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.803 [2024-12-09 11:15:21.948130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.948149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.948159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.948169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.948190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:20.803 [2024-12-09 11:15:21.958096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.803 [2024-12-09 11:15:21.958163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.958179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.958190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.958199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.958219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:20.803 [2024-12-09 11:15:21.968052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:20.803 [2024-12-09 11:15:21.968117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:20.803 [2024-12-09 11:15:21.968133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:20.803 [2024-12-09 11:15:21.968145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:20.803 [2024-12-09 11:15:21.968155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:20.803 [2024-12-09 11:15:21.968174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:20.803 qpair failed and we were unable to recover it.
01:04:21.064 [2024-12-09 11:15:21.978168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.064 [2024-12-09 11:15:21.978232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.064 [2024-12-09 11:15:21.978250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.064 [2024-12-09 11:15:21.978262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.064 [2024-12-09 11:15:21.978272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.064 [2024-12-09 11:15:21.978292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.064 qpair failed and we were unable to recover it.
01:04:21.064 [2024-12-09 11:15:21.988159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.064 [2024-12-09 11:15:21.988229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.064 [2024-12-09 11:15:21.988246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.064 [2024-12-09 11:15:21.988257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.064 [2024-12-09 11:15:21.988267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:21.988286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:21.998188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:21.998253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:21.998270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:21.998281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:21.998290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:21.998310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.008205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.008266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.008282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.008293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.008303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.008322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.018267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.018368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.018385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.018395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.018405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.018426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.028217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.028289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.028308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.028319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.028329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.028349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.038285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.038382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.038399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.038410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.038420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.038441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.048288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.048379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.048396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.048407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.048416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.048436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.058360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.065 [2024-12-09 11:15:22.058463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.065 [2024-12-09 11:15:22.058480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.065 [2024-12-09 11:15:22.058491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.065 [2024-12-09 11:15:22.058500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.065 [2024-12-09 11:15:22.058520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.065 qpair failed and we were unable to recover it.
01:04:21.065 [2024-12-09 11:15:22.068376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.068438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.068454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.068465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.068478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.068499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.078424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.078492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.078508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.078519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.078529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.078550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.088421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.088486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.088504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.088515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.088527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.088547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.098518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.098622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.098640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.098655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.098666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.098687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.108505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.108573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.108590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.108602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.108613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.108633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.118490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.065 [2024-12-09 11:15:22.118562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.065 [2024-12-09 11:15:22.118579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.065 [2024-12-09 11:15:22.118590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.065 [2024-12-09 11:15:22.118600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.065 [2024-12-09 11:15:22.118620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.065 qpair failed and we were unable to recover it. 
01:04:21.065 [2024-12-09 11:15:22.128532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.128606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.128623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.128634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.128648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.128668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.138600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.138674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.138691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.138701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.138711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.138731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.148611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.148695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.148711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.148722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.148732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.148751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.158628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.158714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.158735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.158745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.158755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.158774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.168639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.168740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.168756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.168768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.168778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.168798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.178753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.178849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.178867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.178878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.178887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.178908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.188734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.188798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.188816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.188827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.188837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.188857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.198809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.198882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.198900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.198910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.198924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.198944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.208745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.208809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.208825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.208837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.208848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.208868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.218806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.218897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.218913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.218925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.218935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.218955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.066 [2024-12-09 11:15:22.228858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.066 [2024-12-09 11:15:22.228929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.066 [2024-12-09 11:15:22.228946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.066 [2024-12-09 11:15:22.228956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.066 [2024-12-09 11:15:22.228966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.066 [2024-12-09 11:15:22.228986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.066 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.238818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.238894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.238911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.238922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.238932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.327 [2024-12-09 11:15:22.238953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.327 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.248860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.248923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.248940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.248950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.248960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.327 [2024-12-09 11:15:22.248980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.327 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.258934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.259012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.259030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.259040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.259050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.327 [2024-12-09 11:15:22.259070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.327 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.268965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.269033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.269050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.269060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.269071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.327 [2024-12-09 11:15:22.269094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.327 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.278969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.279069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.279086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.279096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.279106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.327 [2024-12-09 11:15:22.279127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.327 qpair failed and we were unable to recover it. 
01:04:21.327 [2024-12-09 11:15:22.288982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.327 [2024-12-09 11:15:22.289046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.327 [2024-12-09 11:15:22.289066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.327 [2024-12-09 11:15:22.289077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.327 [2024-12-09 11:15:22.289086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.289107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.298990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.299071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.299100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.299111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.299122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.299144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.309016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.309129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.309146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.309157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.309167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.309187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.319092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.319173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.319189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.319200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.319210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.319230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.329080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.329143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.329158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.329173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.329184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.329204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.339183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.339267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.339283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.339295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.339305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.339325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.349118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.328 [2024-12-09 11:15:22.349182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.328 [2024-12-09 11:15:22.349198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.328 [2024-12-09 11:15:22.349209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.328 [2024-12-09 11:15:22.349219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.328 [2024-12-09 11:15:22.349239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.328 qpair failed and we were unable to recover it. 
01:04:21.328 [2024-12-09 11:15:22.359150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.359215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.359231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.359242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.359252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.359272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.369151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.369213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.369230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.369241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.369251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.369274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.379221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.379334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.379350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.379361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.379371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.379392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.389315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.389429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.389446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.389458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.389469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.389488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.399343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.399411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.399427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.399438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.399449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.399469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.409268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.409333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.409351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.409362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.409372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.409392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.419326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.328 [2024-12-09 11:15:22.419396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.328 [2024-12-09 11:15:22.419413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.328 [2024-12-09 11:15:22.419424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.328 [2024-12-09 11:15:22.419434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.328 [2024-12-09 11:15:22.419453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.328 qpair failed and we were unable to recover it.
01:04:21.328 [2024-12-09 11:15:22.429362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.429447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.429465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.429475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.429485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.429505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.439402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.439481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.439498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.439509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.439519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.439538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.449422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.449505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.449522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.449533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.449543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.449561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.459472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.459586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.459603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.459617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.459627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.459652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.469520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.469580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.469596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.469607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.469617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.469636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.479551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.479613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.479629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.479640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.479654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.479675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.489535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.489620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.489636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.489651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.489661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.489680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.329 [2024-12-09 11:15:22.499724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.329 [2024-12-09 11:15:22.499788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.329 [2024-12-09 11:15:22.499806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.329 [2024-12-09 11:15:22.499817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.329 [2024-12-09 11:15:22.499827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.329 [2024-12-09 11:15:22.499855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.329 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.509650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.509713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.509730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.509741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.509750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.509769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.519636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.519709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.519726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.519736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.519746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.519766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.529602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.529668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.529685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.529696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.529705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.529724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.539746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.539815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.539832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.539842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.539851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.539871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.549751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.549832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.549849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.549860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.549869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.549889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.559801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.559863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.559879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.559890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.559900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.559919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.569776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.569855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.569872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.569882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.569892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.569911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.579854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.579928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.579945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.579956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.579966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.579986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.589861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.589927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.590 [2024-12-09 11:15:22.589946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.590 [2024-12-09 11:15:22.589957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.590 [2024-12-09 11:15:22.589967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.590 [2024-12-09 11:15:22.589986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.590 qpair failed and we were unable to recover it.
01:04:21.590 [2024-12-09 11:15:22.599904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.590 [2024-12-09 11:15:22.599984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.600002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.600013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.600023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.600042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.609828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.609889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.609906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.609919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.609929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.609949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.619952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.620025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.620042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.620053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.620062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.620081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.629920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.629985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.630000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.630011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.630024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.630044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.639971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.640040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.640057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.640067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.640077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.640096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.649942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.650005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.650024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.650035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.650046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.650065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.660142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:21.591 [2024-12-09 11:15:22.660241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:21.591 [2024-12-09 11:15:22.660259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:21.591 [2024-12-09 11:15:22.660270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:21.591 [2024-12-09 11:15:22.660281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:21.591 [2024-12-09 11:15:22.660302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:21.591 qpair failed and we were unable to recover it.
01:04:21.591 [2024-12-09 11:15:22.670135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.670235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.670251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.670261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.670271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.670291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.680132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.680210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.680226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.680237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.680247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.680267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.690043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.690112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.690129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.690139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.690149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.690168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.700218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.700324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.700343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.700353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.700364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.700385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.710247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.710351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.710368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.710379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.710389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.710408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.720230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.720293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.720313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.720324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.591 [2024-12-09 11:15:22.720333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.591 [2024-12-09 11:15:22.720352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.591 qpair failed and we were unable to recover it. 
01:04:21.591 [2024-12-09 11:15:22.730275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.591 [2024-12-09 11:15:22.730338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.591 [2024-12-09 11:15:22.730355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.591 [2024-12-09 11:15:22.730366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.592 [2024-12-09 11:15:22.730376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.592 [2024-12-09 11:15:22.730395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.592 qpair failed and we were unable to recover it. 
01:04:21.592 [2024-12-09 11:15:22.740291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.592 [2024-12-09 11:15:22.740356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.592 [2024-12-09 11:15:22.740373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.592 [2024-12-09 11:15:22.740383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.592 [2024-12-09 11:15:22.740393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.592 [2024-12-09 11:15:22.740412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.592 qpair failed and we were unable to recover it. 
01:04:21.592 [2024-12-09 11:15:22.750369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.592 [2024-12-09 11:15:22.750440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.592 [2024-12-09 11:15:22.750456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.592 [2024-12-09 11:15:22.750466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.592 [2024-12-09 11:15:22.750476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.592 [2024-12-09 11:15:22.750495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.592 qpair failed and we were unable to recover it. 
01:04:21.592 [2024-12-09 11:15:22.760341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.592 [2024-12-09 11:15:22.760400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.592 [2024-12-09 11:15:22.760416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.592 [2024-12-09 11:15:22.760427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.592 [2024-12-09 11:15:22.760440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.592 [2024-12-09 11:15:22.760460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.592 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.770344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.770403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.770420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.770431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.770441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.770461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.780422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.780497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.780514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.780524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.780534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.780553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.790364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.790424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.790441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.790451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.790461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.790481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.800458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.800540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.800559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.800570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.800580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.800599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.810406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.810489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.810507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.810518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.810527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.810547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.820530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.820625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.820641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.820656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.820665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.820686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.830542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.830597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.852 [2024-12-09 11:15:22.830614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.852 [2024-12-09 11:15:22.830624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.852 [2024-12-09 11:15:22.830634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.852 [2024-12-09 11:15:22.830658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.852 qpair failed and we were unable to recover it. 
01:04:21.852 [2024-12-09 11:15:22.840580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.852 [2024-12-09 11:15:22.840670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.840686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.840696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.840706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.840725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.850598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.850709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.850725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.850735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.850746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.850765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.860679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.860747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.860764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.860776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.860786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.860805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.870676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.870738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.870769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.870779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.870789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.870809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.880689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.880759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.880776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.880788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.880798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.880817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.890654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.890718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.890734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.890747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.890763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.890790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.900794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.900899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.900916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.900927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.900937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.900957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.910754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.910816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.910834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.910845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.910855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.910876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.920849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.920908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.920924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.920935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.920944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.920964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.930807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.930878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.930894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.930904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.930915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.930937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.940917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.940999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.941016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.941026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.941036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.941056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.950890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.951003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.951020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.951031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.951041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.951061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.960928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.961001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.961018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.961029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.961038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.853 [2024-12-09 11:15:22.961058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.853 qpair failed and we were unable to recover it. 
01:04:21.853 [2024-12-09 11:15:22.970904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.853 [2024-12-09 11:15:22.971015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.853 [2024-12-09 11:15:22.971031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.853 [2024-12-09 11:15:22.971042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.853 [2024-12-09 11:15:22.971053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:22.971072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:21.854 [2024-12-09 11:15:22.980909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.854 [2024-12-09 11:15:22.980972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.854 [2024-12-09 11:15:22.980990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.854 [2024-12-09 11:15:22.981000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.854 [2024-12-09 11:15:22.981010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:22.981029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:21.854 [2024-12-09 11:15:22.990984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.854 [2024-12-09 11:15:22.991045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.854 [2024-12-09 11:15:22.991061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.854 [2024-12-09 11:15:22.991072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.854 [2024-12-09 11:15:22.991082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:22.991101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:21.854 [2024-12-09 11:15:23.001064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.854 [2024-12-09 11:15:23.001166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.854 [2024-12-09 11:15:23.001183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.854 [2024-12-09 11:15:23.001194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.854 [2024-12-09 11:15:23.001204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:23.001224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:21.854 [2024-12-09 11:15:23.011017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.854 [2024-12-09 11:15:23.011078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.854 [2024-12-09 11:15:23.011096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.854 [2024-12-09 11:15:23.011107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.854 [2024-12-09 11:15:23.011117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:23.011137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:21.854 [2024-12-09 11:15:23.021094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:21.854 [2024-12-09 11:15:23.021202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:21.854 [2024-12-09 11:15:23.021219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:21.854 [2024-12-09 11:15:23.021233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:21.854 [2024-12-09 11:15:23.021244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:21.854 [2024-12-09 11:15:23.021264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:21.854 qpair failed and we were unable to recover it. 
01:04:22.114 [2024-12-09 11:15:23.031132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.114 [2024-12-09 11:15:23.031197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.114 [2024-12-09 11:15:23.031213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.114 [2024-12-09 11:15:23.031224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.114 [2024-12-09 11:15:23.031234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.114 [2024-12-09 11:15:23.031254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.114 qpair failed and we were unable to recover it. 
01:04:22.114 [2024-12-09 11:15:23.041144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.114 [2024-12-09 11:15:23.041204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.114 [2024-12-09 11:15:23.041220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.114 [2024-12-09 11:15:23.041231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.114 [2024-12-09 11:15:23.041241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.114 [2024-12-09 11:15:23.041260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.114 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.051138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.051196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.051212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.051223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.051233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.051252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.061271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.061336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.061354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.061365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.061375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.061397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.071248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.071349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.071366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.071377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.071387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.071407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.081299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.081400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.081416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.081426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.081436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.081456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.091245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.091304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.091322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.091333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.091343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.091362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.101250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.101321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.101338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.101350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.101360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.101379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.111319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.111383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.111400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.111411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.111421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.111441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.121349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.121407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.121424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.121436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.121445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.121465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.131377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.131445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.131461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.131472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.131482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.131501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.141419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.141482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.141499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.141509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.141519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.141538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.151389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.151448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.151467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.151478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.151488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.151507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.161451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.161518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.161536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.161546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.161555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.161575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.171484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.115 [2024-12-09 11:15:23.171549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.115 [2024-12-09 11:15:23.171566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.115 [2024-12-09 11:15:23.171576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.115 [2024-12-09 11:15:23.171586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.115 [2024-12-09 11:15:23.171606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.115 qpair failed and we were unable to recover it. 
01:04:22.115 [2024-12-09 11:15:23.181542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.181607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.181624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.181635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.181651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.181671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.191582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.191643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.191664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.191675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.191688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.191708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.201583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.201651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.201667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.201678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.201689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.201708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.211569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.211629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.211650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.211661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.211672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.211691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.221699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.221796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.221814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.221825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.221835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.221856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.231686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.231766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.231785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.231796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.231807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.231828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.241712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.241778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.241795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.241806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.241816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.241835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.251711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.251770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.251786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.251797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.251807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.251826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.261768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.261844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.261861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.261872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.261881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.261900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.271843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.271909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.271927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.271938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.271948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.271967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.116 [2024-12-09 11:15:23.281861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.116 [2024-12-09 11:15:23.281930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.116 [2024-12-09 11:15:23.281950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.116 [2024-12-09 11:15:23.281961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.116 [2024-12-09 11:15:23.281971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.116 [2024-12-09 11:15:23.281991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.116 qpair failed and we were unable to recover it. 
01:04:22.378 [2024-12-09 11:15:23.291826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.378 [2024-12-09 11:15:23.291905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.378 [2024-12-09 11:15:23.291922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.378 [2024-12-09 11:15:23.291934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.378 [2024-12-09 11:15:23.291944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.378 [2024-12-09 11:15:23.291964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.378 qpair failed and we were unable to recover it. 
01:04:22.378 [2024-12-09 11:15:23.301886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.378 [2024-12-09 11:15:23.301980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.378 [2024-12-09 11:15:23.302000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.378 [2024-12-09 11:15:23.302011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.378 [2024-12-09 11:15:23.302023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.378 [2024-12-09 11:15:23.302042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.378 qpair failed and we were unable to recover it. 
01:04:22.378 [2024-12-09 11:15:23.311910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.378 [2024-12-09 11:15:23.311984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.378 [2024-12-09 11:15:23.312000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.378 [2024-12-09 11:15:23.312011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.378 [2024-12-09 11:15:23.312021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.378 [2024-12-09 11:15:23.312040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.378 qpair failed and we were unable to recover it. 
01:04:22.378 [2024-12-09 11:15:23.321956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.378 [2024-12-09 11:15:23.322016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.378 [2024-12-09 11:15:23.322033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.378 [2024-12-09 11:15:23.322043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.378 [2024-12-09 11:15:23.322056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.378 [2024-12-09 11:15:23.322076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.378 qpair failed and we were unable to recover it. 
01:04:22.378 [2024-12-09 11:15:23.331876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.378 [2024-12-09 11:15:23.331937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.378 [2024-12-09 11:15:23.331953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.378 [2024-12-09 11:15:23.331964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.378 [2024-12-09 11:15:23.331974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.331993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.342049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.342162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.342179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.342190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.342201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.342221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.351960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.352025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.352043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.352054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.352065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.352084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.362053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.362117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.362134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.362145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.362155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.362174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.372080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.372144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.372161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.372172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.372182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.372202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.382114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.382200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.382217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.382228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.382238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.382257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.392157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.392226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.392243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.392253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.392262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.392282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.402113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.402175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.402191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.402202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.402212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.402231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.412177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.412291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.412308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.412319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.412329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.412348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.422229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.422321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.422338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.422349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.422359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.422379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.432266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.432331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.432349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.432361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.432370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.432389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.442279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.442344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.442362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.442372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.442382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.442401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.452403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.452510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.452528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.452542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.452552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.452571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.462460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.379 [2024-12-09 11:15:23.462532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.379 [2024-12-09 11:15:23.462550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.379 [2024-12-09 11:15:23.462561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.379 [2024-12-09 11:15:23.462571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.379 [2024-12-09 11:15:23.462590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.379 qpair failed and we were unable to recover it. 
01:04:22.379 [2024-12-09 11:15:23.472515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.472581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.472598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.472609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.472619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.472638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.482506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.482570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.482587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.482598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.482608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.482627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.492420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.492483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.492501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.492512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.492522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.492544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.502465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.502577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.502594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.502605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.502615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.502635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.512491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.512557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.512574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.512585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.512594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.512614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.522507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.522570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.522587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.522598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.522607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.522626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.532496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.532555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.532571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.532582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.532592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.532611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.380 [2024-12-09 11:15:23.542568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.380 [2024-12-09 11:15:23.542671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.380 [2024-12-09 11:15:23.542690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.380 [2024-12-09 11:15:23.542700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.380 [2024-12-09 11:15:23.542710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.380 [2024-12-09 11:15:23.542730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.380 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.552626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.552693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.552711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.552722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.552732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.552753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.562620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.562694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.562711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.562722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.562732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.562751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.572651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.572752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.572770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.572780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.572790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.572810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.582703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.582776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.582796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.582807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.582817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.582836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.592736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.592806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.592823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.592834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.592843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.592863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.602765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.602873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.602889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.602900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.602910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.602929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.612776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.612883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.612899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.612910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.612920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.612940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.622819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.622886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.622903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.622914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.622924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.622946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.632836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.632916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.632932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.632943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.632952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.645 [2024-12-09 11:15:23.632972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.645 qpair failed and we were unable to recover it. 
01:04:22.645 [2024-12-09 11:15:23.642869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.645 [2024-12-09 11:15:23.642934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.645 [2024-12-09 11:15:23.642953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.645 [2024-12-09 11:15:23.642963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.645 [2024-12-09 11:15:23.642973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.642993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.652869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.652938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.652955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.652966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.652977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.652996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.662948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.663067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.663086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.663097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.663107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.663128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.672921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.672996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.673015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.673026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.673037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.673058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.682986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.683057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.683074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.683085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.683095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.683115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.692988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.693052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.693069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.693081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.693092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.693112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.703017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.703082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.703099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.703110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.703120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.703139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.713050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.713118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.713139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.713149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.713159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.713179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.723098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.723167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.723184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.723195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.723205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.723224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.733133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.733242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.733259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.733270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.733280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.733300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.743253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.743330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.743347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.743358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.743368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.743389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.753111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.753178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.753196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.753207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.753220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.753241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.763200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.763282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.763298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.763309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.763319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.763339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.773205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.646 [2024-12-09 11:15:23.773267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.646 [2024-12-09 11:15:23.773284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.646 [2024-12-09 11:15:23.773295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.646 [2024-12-09 11:15:23.773305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.646 [2024-12-09 11:15:23.773324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.646 qpair failed and we were unable to recover it. 
01:04:22.646 [2024-12-09 11:15:23.783282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.647 [2024-12-09 11:15:23.783397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.647 [2024-12-09 11:15:23.783416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.647 [2024-12-09 11:15:23.783428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.647 [2024-12-09 11:15:23.783439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.647 [2024-12-09 11:15:23.783460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.647 qpair failed and we were unable to recover it. 
01:04:22.647 [2024-12-09 11:15:23.793234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.647 [2024-12-09 11:15:23.793345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.647 [2024-12-09 11:15:23.793362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.647 [2024-12-09 11:15:23.793373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.647 [2024-12-09 11:15:23.793383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.647 [2024-12-09 11:15:23.793403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.647 qpair failed and we were unable to recover it. 
01:04:22.647 [2024-12-09 11:15:23.803316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.647 [2024-12-09 11:15:23.803387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.647 [2024-12-09 11:15:23.803404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.647 [2024-12-09 11:15:23.803415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.647 [2024-12-09 11:15:23.803425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.647 [2024-12-09 11:15:23.803444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.647 qpair failed and we were unable to recover it. 
01:04:22.647 [2024-12-09 11:15:23.813302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.647 [2024-12-09 11:15:23.813364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.647 [2024-12-09 11:15:23.813381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.647 [2024-12-09 11:15:23.813393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.647 [2024-12-09 11:15:23.813402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.647 [2024-12-09 11:15:23.813422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.647 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.823421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.823496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.908 [2024-12-09 11:15:23.823512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.908 [2024-12-09 11:15:23.823525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.908 [2024-12-09 11:15:23.823535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.908 [2024-12-09 11:15:23.823555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.908 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.833352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.833428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.908 [2024-12-09 11:15:23.833445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.908 [2024-12-09 11:15:23.833455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.908 [2024-12-09 11:15:23.833465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.908 [2024-12-09 11:15:23.833485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.908 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.843433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.843524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.908 [2024-12-09 11:15:23.843548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.908 [2024-12-09 11:15:23.843560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.908 [2024-12-09 11:15:23.843570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.908 [2024-12-09 11:15:23.843591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.908 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.853450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.853515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.908 [2024-12-09 11:15:23.853532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.908 [2024-12-09 11:15:23.853543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.908 [2024-12-09 11:15:23.853554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.908 [2024-12-09 11:15:23.853573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.908 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.863426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.863494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.908 [2024-12-09 11:15:23.863511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.908 [2024-12-09 11:15:23.863522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.908 [2024-12-09 11:15:23.863532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.908 [2024-12-09 11:15:23.863551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.908 qpair failed and we were unable to recover it. 
01:04:22.908 [2024-12-09 11:15:23.873522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.908 [2024-12-09 11:15:23.873592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.909 [2024-12-09 11:15:23.873609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.909 [2024-12-09 11:15:23.873620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.909 [2024-12-09 11:15:23.873630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.909 [2024-12-09 11:15:23.873656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.909 qpair failed and we were unable to recover it. 
01:04:22.909 [2024-12-09 11:15:23.883599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:22.909 [2024-12-09 11:15:23.883679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:22.909 [2024-12-09 11:15:23.883697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:22.909 [2024-12-09 11:15:23.883710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:22.909 [2024-12-09 11:15:23.883721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:22.909 [2024-12-09 11:15:23.883741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:22.909 qpair failed and we were unable to recover it. 
01:04:22.909 [2024-12-09 11:15:23.893526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.893589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.893606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.893618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.893628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.893655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.903618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.903693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.903711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.903721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.903731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.903751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.913641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.913719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.913736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.913747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.913757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.913778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.923672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.923784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.923802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.923813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.923822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.923842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.933639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.933710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.933726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.933737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.933748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.933768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.943698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.943763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.943781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.943792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.943802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.943822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.953739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.953852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.953869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.953879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.953889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.953909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.963778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.963845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.963862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.963873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.963883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.963903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.973746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.973814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.973830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.973841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.973851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.973871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.983832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.983905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.983923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.983934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.983944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.983964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:23.993834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:23.993903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:23.993920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:23.993931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.909 [2024-12-09 11:15:23.993940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.909 [2024-12-09 11:15:23.993961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.909 qpair failed and we were unable to recover it.
01:04:22.909 [2024-12-09 11:15:24.003910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.909 [2024-12-09 11:15:24.003975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.909 [2024-12-09 11:15:24.003992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.909 [2024-12-09 11:15:24.004003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.004013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.004033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.013878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.013950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.013966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.013984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.013993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.014014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.023982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.024056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.024074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.024085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.024094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.024114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.033920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.033986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.034003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.034014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.034024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.034043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.043934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.043997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.044015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.044026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.044036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.044057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.053984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.054045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.054064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.054075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.054086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.054110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.064075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.064143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.064160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.064171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.064181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.064201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:22.910 [2024-12-09 11:15:24.074051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:22.910 [2024-12-09 11:15:24.074130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:22.910 [2024-12-09 11:15:24.074147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:22.910 [2024-12-09 11:15:24.074158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:22.910 [2024-12-09 11:15:24.074168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:22.910 [2024-12-09 11:15:24.074188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:22.910 qpair failed and we were unable to recover it.
01:04:23.173 [2024-12-09 11:15:24.084140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.173 [2024-12-09 11:15:24.084208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.173 [2024-12-09 11:15:24.084225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.173 [2024-12-09 11:15:24.084236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.173 [2024-12-09 11:15:24.084246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.173 [2024-12-09 11:15:24.084266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.173 qpair failed and we were unable to recover it.
01:04:23.173 [2024-12-09 11:15:24.094082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.173 [2024-12-09 11:15:24.094172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.173 [2024-12-09 11:15:24.094189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.173 [2024-12-09 11:15:24.094200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.173 [2024-12-09 11:15:24.094210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.173 [2024-12-09 11:15:24.094230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.104229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.104304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.104321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.104332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.104342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.104363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.114198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.114261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.114277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.114288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.114298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.114319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.124221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.124295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.124312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.124323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.124332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.124353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.134265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.134368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.134387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.134398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.134409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.134429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.144265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.144336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.144356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.144368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.144379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.144398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.154287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.154367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.154384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.154395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.154405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.154425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.164328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.164392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.164408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.164420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.164430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.164450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.174311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.174413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.174430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.174441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.174451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.174473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.184375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.184478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.184495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.184506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.184517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.184540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.194415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.194477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.194493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.194505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.194515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.194536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.204446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.204525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.204543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.204554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.204564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.204584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.214429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.214498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.214515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.214525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.214535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.214555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.174 qpair failed and we were unable to recover it.
01:04:23.174 [2024-12-09 11:15:24.224479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.174 [2024-12-09 11:15:24.224562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.174 [2024-12-09 11:15:24.224579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.174 [2024-12-09 11:15:24.224590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.174 [2024-12-09 11:15:24.224600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.174 [2024-12-09 11:15:24.224620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.175 qpair failed and we were unable to recover it.
01:04:23.175 [2024-12-09 11:15:24.234515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:23.175 [2024-12-09 11:15:24.234618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:23.175 [2024-12-09 11:15:24.234635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:23.175 [2024-12-09 11:15:24.234650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:23.175 [2024-12-09 11:15:24.234660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:23.175 [2024-12-09 11:15:24.234680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:23.175 qpair failed and we were unable to recover it.
01:04:23.175 [2024-12-09 11:15:24.244547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.244619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.244636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.244660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.244670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.244690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.254546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.254610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.254625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.254636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.254651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.254672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.264621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.264695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.264714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.264724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.264734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.264755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.274641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.274716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.274737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.274747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.274757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.274777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.284709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.284775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.284792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.284803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.284813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.284833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.294665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.294735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.294754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.294765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.294775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.294795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.304740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.304805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.304823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.304833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.304843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.304864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.314800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.314906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.314923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.314934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.314947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.314967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.324795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.324903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.324920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.324930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.324940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.324960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.334768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.334828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.334844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.334855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.334865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.334885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.175 [2024-12-09 11:15:24.344849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.175 [2024-12-09 11:15:24.344915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.175 [2024-12-09 11:15:24.344932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.175 [2024-12-09 11:15:24.344942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.175 [2024-12-09 11:15:24.344952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.175 [2024-12-09 11:15:24.344971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.175 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.354878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.354944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.354963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.354974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.354984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.355005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.364910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.365005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.365022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.365033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.365043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.365063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.374887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.374954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.374971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.374983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.374993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.375013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.384957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.385028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.385044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.385055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.385065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.385085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.394988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.395063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.395081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.395091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.395101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.395120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.405003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.405065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.405086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.405097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.405107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.405128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.415014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.415078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.415096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.415107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.415118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.415138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.425073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.425144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.425161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.425172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.425181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.425201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.435085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.435164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.435182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.435193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.435202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.435222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.445116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.445178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.445194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.445209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.445219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.445239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.455146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.455258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.455275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.455286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.455296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.455316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.465196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.465267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.465284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.465295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.465306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.465326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.438 [2024-12-09 11:15:24.475202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.438 [2024-12-09 11:15:24.475270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.438 [2024-12-09 11:15:24.475289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.438 [2024-12-09 11:15:24.475302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.438 [2024-12-09 11:15:24.475313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.438 [2024-12-09 11:15:24.475336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.438 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.485225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.485292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.485309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.485319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.485329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.485348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.495230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.495298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.495316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.495326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.495336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.495357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.505235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.505317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.505334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.505344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.505354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.505375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.515310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.515410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.515427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.515438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.515449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.515470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.525372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.525430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.525448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.525460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.525470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.525491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.535350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.535417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.535436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.535447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.535458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.535479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.545323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.545395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.545413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.545424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.545434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.545454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.555418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.555487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.555505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.555516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.555526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.555547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.565443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.565519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.565537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.565548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.565558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.565577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.575442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.575504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.575520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.575535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.575545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.575564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.585513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.585576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.585593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.585604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.585614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.585634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.595529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.595619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.595635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.595650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.595660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.595681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.439 [2024-12-09 11:15:24.605572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.439 [2024-12-09 11:15:24.605635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.439 [2024-12-09 11:15:24.605659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.439 [2024-12-09 11:15:24.605671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.439 [2024-12-09 11:15:24.605681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.439 [2024-12-09 11:15:24.605701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.439 qpair failed and we were unable to recover it. 
01:04:23.700 [2024-12-09 11:15:24.615560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.700 [2024-12-09 11:15:24.615657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.700 [2024-12-09 11:15:24.615676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.700 [2024-12-09 11:15:24.615688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.700 [2024-12-09 11:15:24.615699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.700 [2024-12-09 11:15:24.615729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.700 qpair failed and we were unable to recover it. 
01:04:23.700 [2024-12-09 11:15:24.625638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.700 [2024-12-09 11:15:24.625711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.700 [2024-12-09 11:15:24.625730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.700 [2024-12-09 11:15:24.625742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.700 [2024-12-09 11:15:24.625752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.700 [2024-12-09 11:15:24.625773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.700 qpair failed and we were unable to recover it. 
01:04:23.700 [2024-12-09 11:15:24.635679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.700 [2024-12-09 11:15:24.635779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.700 [2024-12-09 11:15:24.635796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.700 [2024-12-09 11:15:24.635807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.700 [2024-12-09 11:15:24.635817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.700 [2024-12-09 11:15:24.635836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.700 qpair failed and we were unable to recover it. 
01:04:23.700 [2024-12-09 11:15:24.645775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.700 [2024-12-09 11:15:24.645842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.700 [2024-12-09 11:15:24.645859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.700 [2024-12-09 11:15:24.645870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.645880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.645900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.655673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.655740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.655757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.655768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.655778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.655798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.665767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.665834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.665851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.665861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.665872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.665891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.675783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.675847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.675865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.675876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.675886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.675907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.685775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.685885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.685902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.685913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.685922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.685942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.695774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.695854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.695872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.695883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.695893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.695914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.705892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.706007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.706027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.706037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.706047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.706066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.715880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.715938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.715955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.715965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.715975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.715994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.725896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.725958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.725974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.725986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.725996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.726015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.735881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.735946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.735962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.735973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.735982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.736002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.745943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.746008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.746025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.746035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.746052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.746072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.755965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.756033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.756050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.756061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.756070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.756090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.765995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.766071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.766088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.766098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.766108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.766127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.701 qpair failed and we were unable to recover it. 
01:04:23.701 [2024-12-09 11:15:24.776007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.701 [2024-12-09 11:15:24.776069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.701 [2024-12-09 11:15:24.776086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.701 [2024-12-09 11:15:24.776097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.701 [2024-12-09 11:15:24.776106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.701 [2024-12-09 11:15:24.776125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.786061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.786135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.786151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.786162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.786172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.786191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.796135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.796238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.796255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.796265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.796274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.796294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.806092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.806150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.806167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.806177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.806187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.806206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.816136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.816241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.816257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.816267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.816276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.816296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.826215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.826293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.826310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.826320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.826330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.826350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.836254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.836354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.836374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.836384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.836394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.836413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.846245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.846343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.846359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.846370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.846379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.846399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.856211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.856286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.856303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.856313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.856323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.856343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.702 [2024-12-09 11:15:24.866302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.702 [2024-12-09 11:15:24.866364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.702 [2024-12-09 11:15:24.866381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.702 [2024-12-09 11:15:24.866392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.702 [2024-12-09 11:15:24.866402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.702 [2024-12-09 11:15:24.866421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.702 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.876314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.876388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.876405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.876416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.876429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.876448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.886344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.886404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.886422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.886434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.886444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.886465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.896373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.896439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.896458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.896469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.896478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.896499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.906370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.906435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.906452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.906462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.906472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.906492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.916470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.916557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.916573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.916583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.916593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.916613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.926425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.926491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.926508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.926519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.926528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.926548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.936413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.936475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.936491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.936503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.936512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.936532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.946506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.946568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.946585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.946595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.946605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.946625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.956548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.956623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.964 [2024-12-09 11:15:24.956641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.964 [2024-12-09 11:15:24.956655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.964 [2024-12-09 11:15:24.956665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.964 [2024-12-09 11:15:24.956686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.964 qpair failed and we were unable to recover it. 
01:04:23.964 [2024-12-09 11:15:24.966592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.964 [2024-12-09 11:15:24.966662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:24.966682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:24.966693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:24.966703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:24.966722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:24.976545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:24.976616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:24.976633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:24.976648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:24.976658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:24.976678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:24.986619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:24.986697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:24.986714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:24.986725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:24.986735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:24.986754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:24.996638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:24.996720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:24.996737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:24.996748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:24.996758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:24.996777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.006658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.006719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.006736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.006750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.006760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.006779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.016677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.016746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.016763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.016774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.016783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.016803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.026749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.026810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.026827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.026837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.026847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.026867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.036762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.036823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.036839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.036850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.036860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.036879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.046791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.046856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.046873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.046883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.046893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.046911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.056802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.056865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.056882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.056892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.056902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.056921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.066865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.066933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.066949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.066960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.066970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.066989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.076873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.076941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.076958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.076968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.076978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.076997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.086904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.086965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.086982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.086993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.087002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.965 [2024-12-09 11:15:25.087022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.965 qpair failed and we were unable to recover it. 
01:04:23.965 [2024-12-09 11:15:25.096882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.965 [2024-12-09 11:15:25.096951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.965 [2024-12-09 11:15:25.096970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.965 [2024-12-09 11:15:25.096980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.965 [2024-12-09 11:15:25.096990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.966 [2024-12-09 11:15:25.097010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.966 qpair failed and we were unable to recover it. 
01:04:23.966 [2024-12-09 11:15:25.106914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.966 [2024-12-09 11:15:25.106977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.966 [2024-12-09 11:15:25.106993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.966 [2024-12-09 11:15:25.107004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.966 [2024-12-09 11:15:25.107013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.966 [2024-12-09 11:15:25.107033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.966 qpair failed and we were unable to recover it. 
01:04:23.966 [2024-12-09 11:15:25.116987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.966 [2024-12-09 11:15:25.117101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.966 [2024-12-09 11:15:25.117118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.966 [2024-12-09 11:15:25.117129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.966 [2024-12-09 11:15:25.117139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.966 [2024-12-09 11:15:25.117159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.966 qpair failed and we were unable to recover it. 
01:04:23.966 [2024-12-09 11:15:25.126969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.966 [2024-12-09 11:15:25.127047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.966 [2024-12-09 11:15:25.127063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.966 [2024-12-09 11:15:25.127074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.966 [2024-12-09 11:15:25.127084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.966 [2024-12-09 11:15:25.127104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.966 qpair failed and we were unable to recover it. 
01:04:23.966 [2024-12-09 11:15:25.137028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:23.966 [2024-12-09 11:15:25.137098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:23.966 [2024-12-09 11:15:25.137116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:23.966 [2024-12-09 11:15:25.137131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:23.966 [2024-12-09 11:15:25.137141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:23.966 [2024-12-09 11:15:25.137162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:23.966 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.488053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.488113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.488129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.488140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.488149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.488168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.498052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.498116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.498133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.498145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.498154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.498174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.508093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.508163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.508181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.508192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.508201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.508221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.518110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.518180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.518198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.518211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.518222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.518243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.528192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.528250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.528270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.528280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.528290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.528309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.538152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.538211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.538226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.538237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.538248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.538268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.548244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.548312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.548329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.548340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.548349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.548369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.558198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.558265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.558282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.558292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.558302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.558321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.568281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.568341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.568358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.568372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.568381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.568400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.578298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.578359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.578376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.578387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.578397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.578416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.588346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.588452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.588469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.588479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.588489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.588509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.598371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.598435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.598452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.598463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.598473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.489 [2024-12-09 11:15:25.598492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.489 qpair failed and we were unable to recover it. 
01:04:24.489 [2024-12-09 11:15:25.608407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.489 [2024-12-09 11:15:25.608487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.489 [2024-12-09 11:15:25.608504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.489 [2024-12-09 11:15:25.608515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.489 [2024-12-09 11:15:25.608524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.608544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.490 [2024-12-09 11:15:25.618395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.490 [2024-12-09 11:15:25.618458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.490 [2024-12-09 11:15:25.618475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.490 [2024-12-09 11:15:25.618486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.490 [2024-12-09 11:15:25.618495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.618514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.490 [2024-12-09 11:15:25.628516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.490 [2024-12-09 11:15:25.628619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.490 [2024-12-09 11:15:25.628636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.490 [2024-12-09 11:15:25.628652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.490 [2024-12-09 11:15:25.628663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.628683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.490 [2024-12-09 11:15:25.638477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.490 [2024-12-09 11:15:25.638563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.490 [2024-12-09 11:15:25.638582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.490 [2024-12-09 11:15:25.638593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.490 [2024-12-09 11:15:25.638603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.638622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.490 [2024-12-09 11:15:25.648531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.490 [2024-12-09 11:15:25.648600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.490 [2024-12-09 11:15:25.648618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.490 [2024-12-09 11:15:25.648629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.490 [2024-12-09 11:15:25.648639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.648663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.490 [2024-12-09 11:15:25.658518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.490 [2024-12-09 11:15:25.658594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.490 [2024-12-09 11:15:25.658611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.490 [2024-12-09 11:15:25.658622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.490 [2024-12-09 11:15:25.658632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.490 [2024-12-09 11:15:25.658656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.490 qpair failed and we were unable to recover it. 
01:04:24.751 [2024-12-09 11:15:25.668501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.751 [2024-12-09 11:15:25.668569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.751 [2024-12-09 11:15:25.668585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.751 [2024-12-09 11:15:25.668596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.751 [2024-12-09 11:15:25.668606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.751 [2024-12-09 11:15:25.668625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.751 qpair failed and we were unable to recover it. 
01:04:24.751 [2024-12-09 11:15:25.678608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.751 [2024-12-09 11:15:25.678687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.751 [2024-12-09 11:15:25.678705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.751 [2024-12-09 11:15:25.678715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.751 [2024-12-09 11:15:25.678724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.751 [2024-12-09 11:15:25.678744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.751 qpair failed and we were unable to recover it. 
01:04:24.751 [2024-12-09 11:15:25.688563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.751 [2024-12-09 11:15:25.688631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.751 [2024-12-09 11:15:25.688651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.751 [2024-12-09 11:15:25.688662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.751 [2024-12-09 11:15:25.688672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.751 [2024-12-09 11:15:25.688691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.751 qpair failed and we were unable to recover it. 
01:04:24.751 [2024-12-09 11:15:25.698627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.751 [2024-12-09 11:15:25.698692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.751 [2024-12-09 11:15:25.698710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.751 [2024-12-09 11:15:25.698725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.751 [2024-12-09 11:15:25.698734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.751 [2024-12-09 11:15:25.698754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.751 qpair failed and we were unable to recover it. 
01:04:24.751 [2024-12-09 11:15:25.708725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.751 [2024-12-09 11:15:25.708830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.751 [2024-12-09 11:15:25.708847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.751 [2024-12-09 11:15:25.708857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.752 [2024-12-09 11:15:25.708867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.752 [2024-12-09 11:15:25.708888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.752 qpair failed and we were unable to recover it. 
01:04:24.752 [2024-12-09 11:15:25.718726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.752 [2024-12-09 11:15:25.718790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.752 [2024-12-09 11:15:25.718808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.752 [2024-12-09 11:15:25.718820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.752 [2024-12-09 11:15:25.718829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.752 [2024-12-09 11:15:25.718850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.752 qpair failed and we were unable to recover it. 
01:04:24.752 [2024-12-09 11:15:25.728742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.752 [2024-12-09 11:15:25.728820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.752 [2024-12-09 11:15:25.728837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.752 [2024-12-09 11:15:25.728848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.752 [2024-12-09 11:15:25.728857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.752 [2024-12-09 11:15:25.728877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.752 qpair failed and we were unable to recover it. 
01:04:24.752 [2024-12-09 11:15:25.738704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.752 [2024-12-09 11:15:25.738774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.752 [2024-12-09 11:15:25.738791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.752 [2024-12-09 11:15:25.738803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.752 [2024-12-09 11:15:25.738813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.752 [2024-12-09 11:15:25.738836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.752 qpair failed and we were unable to recover it. 
01:04:24.752 [2024-12-09 11:15:25.748819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:24.752 [2024-12-09 11:15:25.748935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:24.752 [2024-12-09 11:15:25.748952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:24.752 [2024-12-09 11:15:25.748963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:24.752 [2024-12-09 11:15:25.748973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:24.752 [2024-12-09 11:15:25.748993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:24.752 qpair failed and we were unable to recover it. 
01:04:24.752 [2024-12-09 11:15:25.758837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.758917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.758933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.758943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.758953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.758972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.768865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.768939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.768956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.768967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.768977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.768996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.778856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.778934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.778950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.778961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.778971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.778991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.788930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.789002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.789019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.789029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.789039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.789058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.798935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.798998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.799014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.799025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.799034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.799054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.809015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.809119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.809135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.809145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.809155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.809174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.818960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.819020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.819036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.819047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.819057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.819076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.829030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.829098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.829118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.752 [2024-12-09 11:15:25.829128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.752 [2024-12-09 11:15:25.829138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.752 [2024-12-09 11:15:25.829157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.752 qpair failed and we were unable to recover it.
01:04:24.752 [2024-12-09 11:15:25.839060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.752 [2024-12-09 11:15:25.839125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.752 [2024-12-09 11:15:25.839141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.839151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.839161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.839180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.849092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.849153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.849169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.849180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.849191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.849210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.859073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.859135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.859150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.859161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.859171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.859191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.869236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.869323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.869339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.869350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.869363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.869382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.879197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.879303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.879319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.879330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.879340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.879359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.889178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.889239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.889258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.889269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.889280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.889301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.899140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.899223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.899241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.899252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.899262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.899282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.909277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.909355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.909372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.909383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.909392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.909412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:24.753 [2024-12-09 11:15:25.919287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:24.753 [2024-12-09 11:15:25.919350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:24.753 [2024-12-09 11:15:25.919368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:24.753 [2024-12-09 11:15:25.919379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:24.753 [2024-12-09 11:15:25.919389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:24.753 [2024-12-09 11:15:25.919408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:24.753 qpair failed and we were unable to recover it.
01:04:25.014 [2024-12-09 11:15:25.929302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.014 [2024-12-09 11:15:25.929373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.014 [2024-12-09 11:15:25.929392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.014 [2024-12-09 11:15:25.929402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.014 [2024-12-09 11:15:25.929412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.014 [2024-12-09 11:15:25.929433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.014 qpair failed and we were unable to recover it.
01:04:25.014 [2024-12-09 11:15:25.939299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.014 [2024-12-09 11:15:25.939359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.014 [2024-12-09 11:15:25.939376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.014 [2024-12-09 11:15:25.939387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.014 [2024-12-09 11:15:25.939397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.939416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.949377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.949488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.949505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.949515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.949526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.949545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.959404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.959466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.959486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.959497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.959506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.959526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.969424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.969486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.969504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.969514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.969524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.969543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.979400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.979464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.979481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.979492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.979502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.979521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.989488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.989572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.989589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.989600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.989610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.989630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:25.999516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:25.999582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:25.999599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:25.999610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:25.999623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:25.999643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.009539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.009615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.009632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.009647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.009658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.009678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.019542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.019616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.019633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.019648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.019658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.019679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.029607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.029680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.029698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.029709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.029719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.029738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.039627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.039702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.039719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.039729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.039739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.039758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.049664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.049728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.049745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.049755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.049766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.049785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.059729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.059803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.059820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.059831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.059840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.015 [2024-12-09 11:15:26.059860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.015 qpair failed and we were unable to recover it.
01:04:25.015 [2024-12-09 11:15:26.069851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.015 [2024-12-09 11:15:26.069920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.015 [2024-12-09 11:15:26.069936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.015 [2024-12-09 11:15:26.069948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.015 [2024-12-09 11:15:26.069957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.016 [2024-12-09 11:15:26.069977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.016 qpair failed and we were unable to recover it.
01:04:25.016 [2024-12-09 11:15:26.079789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.016 [2024-12-09 11:15:26.079891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.016 [2024-12-09 11:15:26.079908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.016 [2024-12-09 11:15:26.079918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.016 [2024-12-09 11:15:26.079928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.016 [2024-12-09 11:15:26.079948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.016 qpair failed and we were unable to recover it.
01:04:25.016 [2024-12-09 11:15:26.089817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.016 [2024-12-09 11:15:26.089907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.016 [2024-12-09 11:15:26.089927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.016 [2024-12-09 11:15:26.089938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.016 [2024-12-09 11:15:26.089947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.016 [2024-12-09 11:15:26.089967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.016 qpair failed and we were unable to recover it.
01:04:25.016 [2024-12-09 11:15:26.099762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.016 [2024-12-09 11:15:26.099842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.016 [2024-12-09 11:15:26.099859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.016 [2024-12-09 11:15:26.099870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.016 [2024-12-09 11:15:26.099881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.016 [2024-12-09 11:15:26.099900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.016 qpair failed and we were unable to recover it.
01:04:25.016 [2024-12-09 11:15:26.109870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.109939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.109956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.109967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.109977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.109997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.119898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.119966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.119983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.119995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.120005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.120025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.129920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.129985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.130004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.130021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.130031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.130052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.139821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.139885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.139904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.139917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.139928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.139949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.149972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.150044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.150063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.150074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.150084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.150104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.160022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.160100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.160118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.160129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.160139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.160159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.170001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.170068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.170085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.170096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.170106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.170130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.016 [2024-12-09 11:15:26.179988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.016 [2024-12-09 11:15:26.180050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.016 [2024-12-09 11:15:26.180066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.016 [2024-12-09 11:15:26.180077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.016 [2024-12-09 11:15:26.180087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.016 [2024-12-09 11:15:26.180106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.016 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.190078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.190147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.190165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.190176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.190186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.190206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.200122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.200222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.200239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.200249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.200259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.200279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.210151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.210218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.210235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.210246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.210256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.210276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.220119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.220222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.220238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.220249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.220259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.220278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.230174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.230244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.230261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.230272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.230281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.230301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.240194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.240258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.240274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.240285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.240294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.240315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.250277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.250376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.250393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.250404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.250414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.250433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.260241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.260349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.260366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.260379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.260389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.260409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.270275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.270380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.270397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.270407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.270417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.270436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.278 [2024-12-09 11:15:26.280298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.278 [2024-12-09 11:15:26.280361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.278 [2024-12-09 11:15:26.280377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.278 [2024-12-09 11:15:26.280388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.278 [2024-12-09 11:15:26.280399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.278 [2024-12-09 11:15:26.280419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.278 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.290365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.290470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.290487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.290498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.290508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.290528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.300326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.300389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.300407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.300419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.300429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.300451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.310386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.310456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.310473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.310484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.310494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.310513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.320429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.320491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.320509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.320520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.320530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.320550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.330464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.330531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.330548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.330559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.330569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.330588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.340461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.340529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.340547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.340558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.340568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.340588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.350506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.350583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.350600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.350611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.350620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.350640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.360515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.360581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.360598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.360609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.360619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.360639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.370606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.370674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.370691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.370702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.370712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.370732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.279 [2024-12-09 11:15:26.380610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 01:04:25.279 [2024-12-09 11:15:26.380679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 01:04:25.279 [2024-12-09 11:15:26.380697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 01:04:25.279 [2024-12-09 11:15:26.380707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:04:25.279 [2024-12-09 11:15:26.380717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90 01:04:25.279 [2024-12-09 11:15:26.380737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 01:04:25.279 qpair failed and we were unable to recover it. 
01:04:25.541 [2024-12-09 11:15:26.611205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
01:04:25.541 [2024-12-09 11:15:26.611314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
01:04:25.541 [2024-12-09 11:15:26.611331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
01:04:25.541 [2024-12-09 11:15:26.611341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:04:25.541 [2024-12-09 11:15:26.611351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dc8000b90
01:04:25.541 [2024-12-09 11:15:26.611371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
01:04:25.541 qpair failed and we were unable to recover it.
01:04:25.541 [2024-12-09 11:15:26.611489] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
01:04:25.541 A controller has encountered a failure and is being reset.
01:04:25.541 qpair failed and we were unable to recover it.
01:04:25.541 qpair failed and we were unable to recover it.
01:04:25.801 Controller properly reset.
01:04:25.801 Initializing NVMe Controllers 01:04:25.801 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:04:25.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:04:25.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 01:04:25.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 01:04:25.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 01:04:25.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 01:04:25.801 Initialization complete. Launching workers. 01:04:25.801 Starting thread on core 1 01:04:25.801 Starting thread on core 2 01:04:25.801 Starting thread on core 3 01:04:25.801 Starting thread on core 0 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 01:04:25.801 01:04:25.801 real 0m11.106s 01:04:25.801 user 0m19.153s 01:04:25.801 sys 0m4.995s 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 01:04:25.801 ************************************ 01:04:25.801 END TEST nvmf_target_disconnect_tc2 01:04:25.801 ************************************ 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:25.801 11:15:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:25.801 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:25.801 rmmod nvme_tcp 01:04:25.802 rmmod nvme_fabrics 01:04:25.802 rmmod nvme_keyring 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2558566 ']' 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2558566 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2558566 ']' 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2558566 01:04:25.802 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 01:04:26.061 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:26.061 11:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558566 01:04:26.061 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 01:04:26.061 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
01:04:26.061 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558566' 01:04:26.061 killing process with pid 2558566 01:04:26.061 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2558566 01:04:26.061 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2558566 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:26.320 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 01:04:26.321 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:26.321 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 01:04:26.321 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:26.321 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:26.321 11:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:28.226 11:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:04:28.226 01:04:28.226 real 0m21.002s 01:04:28.226 user 0m48.508s 01:04:28.226 
sys 0m10.567s 01:04:28.226 11:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:28.226 11:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 01:04:28.226 ************************************ 01:04:28.226 END TEST nvmf_target_disconnect 01:04:28.226 ************************************ 01:04:28.486 11:15:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:04:28.486 01:04:28.486 real 6m41.286s 01:04:28.486 user 11m52.291s 01:04:28.486 sys 2m17.976s 01:04:28.486 11:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:28.486 11:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:04:28.486 ************************************ 01:04:28.486 END TEST nvmf_host 01:04:28.486 ************************************ 01:04:28.486 11:15:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 01:04:28.486 11:15:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 01:04:28.486 11:15:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:04:28.486 11:15:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:04:28.486 11:15:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:28.486 11:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:04:28.486 ************************************ 01:04:28.486 START TEST nvmf_target_core_interrupt_mode 01:04:28.486 ************************************ 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:04:28.486 * Looking for test storage... 
01:04:28.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 01:04:28.486 11:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:28.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:28.486 --rc 
genhtml_branch_coverage=1 01:04:28.486 --rc genhtml_function_coverage=1 01:04:28.486 --rc genhtml_legend=1 01:04:28.486 --rc geninfo_all_blocks=1 01:04:28.486 --rc geninfo_unexecuted_blocks=1 01:04:28.486 01:04:28.486 ' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:28.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:28.486 --rc genhtml_branch_coverage=1 01:04:28.486 --rc genhtml_function_coverage=1 01:04:28.486 --rc genhtml_legend=1 01:04:28.486 --rc geninfo_all_blocks=1 01:04:28.486 --rc geninfo_unexecuted_blocks=1 01:04:28.486 01:04:28.486 ' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:28.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:28.486 --rc genhtml_branch_coverage=1 01:04:28.486 --rc genhtml_function_coverage=1 01:04:28.486 --rc genhtml_legend=1 01:04:28.486 --rc geninfo_all_blocks=1 01:04:28.486 --rc geninfo_unexecuted_blocks=1 01:04:28.486 01:04:28.486 ' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:28.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:28.486 --rc genhtml_branch_coverage=1 01:04:28.486 --rc genhtml_function_coverage=1 01:04:28.486 --rc genhtml_legend=1 01:04:28.486 --rc geninfo_all_blocks=1 01:04:28.486 --rc geninfo_unexecuted_blocks=1 01:04:28.486 01:04:28.486 ' 01:04:28.486 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:28.746 
11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:28.746 11:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:04:28.746 
11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:04:28.746 ************************************ 01:04:28.746 START TEST nvmf_abort 01:04:28.746 ************************************ 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:04:28.746 * Looking for test storage... 
01:04:28.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:04:28.746 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 01:04:29.006 11:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:29.006 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:29.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:29.006 --rc genhtml_branch_coverage=1 01:04:29.007 --rc genhtml_function_coverage=1 01:04:29.007 --rc genhtml_legend=1 01:04:29.007 --rc geninfo_all_blocks=1 01:04:29.007 --rc geninfo_unexecuted_blocks=1 01:04:29.007 01:04:29.007 ' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:29.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:29.007 --rc genhtml_branch_coverage=1 01:04:29.007 --rc genhtml_function_coverage=1 01:04:29.007 --rc genhtml_legend=1 01:04:29.007 --rc geninfo_all_blocks=1 01:04:29.007 --rc geninfo_unexecuted_blocks=1 01:04:29.007 01:04:29.007 ' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:29.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:29.007 --rc genhtml_branch_coverage=1 01:04:29.007 --rc genhtml_function_coverage=1 01:04:29.007 --rc genhtml_legend=1 01:04:29.007 --rc geninfo_all_blocks=1 01:04:29.007 --rc geninfo_unexecuted_blocks=1 01:04:29.007 01:04:29.007 ' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:29.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:29.007 --rc genhtml_branch_coverage=1 01:04:29.007 --rc genhtml_function_coverage=1 01:04:29.007 --rc genhtml_legend=1 01:04:29.007 --rc geninfo_all_blocks=1 01:04:29.007 --rc geninfo_unexecuted_blocks=1 01:04:29.007 01:04:29.007 ' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:29.007 11:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:29.007 11:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 01:04:29.007 11:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:04:35.600 11:15:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:04:35.600 Found 0000:af:00.0 (0x8086 - 0x159b) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:04:35.600 Found 0000:af:00.1 (0x8086 - 0x159b) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:35.600 
11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:04:35.600 Found net devices under 0000:af:00.0: cvl_0_0 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:04:35.600 Found net devices under 0000:af:00.1: cvl_0_1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:04:35.600 11:15:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:04:35.600 11:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 01:04:35.600 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:04:35.600 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:04:35.600 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:04:35.600 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:04:35.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:35.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 01:04:35.600 01:04:35.600 --- 10.0.0.2 ping statistics --- 01:04:35.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.601 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:04:35.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:04:35.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 01:04:35.601 01:04:35.601 --- 10.0.0.1 ping statistics --- 01:04:35.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.601 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2562604 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2562604 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2562604 ']' 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:35.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 [2024-12-09 11:15:36.191955] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:04:35.601 [2024-12-09 11:15:36.193432] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
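The nvmf_tcp_init steps traced above (nvmf/common.sh@250–291) build a two-port loopback topology: one interface stays in the default namespace as the initiator, the other is moved into a fresh namespace as the target, so initiator and target traffic cross a real wire on one host. A dry-run sketch of those steps, using the interface names and addresses from this log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); the `run` wrapper only echoes, since the real `ip`/`iptables` calls require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up by nvmf_tcp_init in this log.
# run() echoes instead of executing: the real commands need root privileges.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged so cleanup can strip it via grep -v SPDK_NVMF:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
```

The cross-namespace pings in the log (common.sh@290–291) then verify this topology before nvmf_tgt is launched inside the namespace with `ip netns exec`.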
01:04:35.601 [2024-12-09 11:15:36.193484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:35.601 [2024-12-09 11:15:36.296072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:35.601 [2024-12-09 11:15:36.340203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:35.601 [2024-12-09 11:15:36.340243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:35.601 [2024-12-09 11:15:36.340253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:35.601 [2024-12-09 11:15:36.340264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:35.601 [2024-12-09 11:15:36.340272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:35.601 [2024-12-09 11:15:36.341532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:04:35.601 [2024-12-09 11:15:36.341551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:04:35.601 [2024-12-09 11:15:36.341553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:35.601 [2024-12-09 11:15:36.411508] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:04:35.601 [2024-12-09 11:15:36.411607] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:04:35.601 [2024-12-09 11:15:36.411701] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
01:04:35.601 [2024-12-09 11:15:36.411822] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 [2024-12-09 11:15:36.494362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
01:04:35.601 Malloc0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 Delay0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 [2024-12-09 11:15:36.566248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.601 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 01:04:35.601 [2024-12-09 11:15:36.704807] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:04:38.141 Initializing NVMe Controllers 01:04:38.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 01:04:38.141 controller IO queue size 128 less than required 01:04:38.141 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 01:04:38.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 01:04:38.141 Initialization complete. Launching workers. 
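The completion counters that the abort example prints at the end of this run obey a simple accounting: every abort attempt (whether or not it could be submitted) targets one I/O, submitted aborts split into successful and unsuccessful ones, and each successful abort shows up as one "failed" (aborted) I/O. A quick check with the values from this log:

```shell
# Abort-example counters copied from the log output of this run.
io_completed=123 io_failed=23404
abort_submitted=23461 abort_submit_failed=66
abort_success=23404 abort_unsuccessful=57

# Every abort attempt targets exactly one I/O: 23461 + 66 == 123 + 23404
(( abort_submitted + abort_submit_failed == io_completed + io_failed ))
# Submitted aborts split into successful and unsuccessful: 23404 + 57 == 23461
(( abort_success + abort_unsuccessful == abort_submitted ))
# Each successful abort accounts for one aborted ("failed") I/O.
(( abort_success == io_failed ))
echo "abort accounting consistent"
```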
01:04:38.141 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23404 01:04:38.141 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23461, failed to submit 66 01:04:38.141 success 23404, unsuccessful 57, failed 0 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:38.141 rmmod nvme_tcp 01:04:38.141 rmmod nvme_fabrics 01:04:38.141 rmmod nvme_keyring 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:38.141 11:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2562604 ']' 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2562604 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2562604 ']' 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2562604 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562604 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562604' 01:04:38.141 killing process with pid 2562604 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2562604 01:04:38.141 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2562604 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:38.141 11:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:38.141 11:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:04:40.681 01:04:40.681 real 0m11.532s 01:04:40.681 user 0m10.804s 01:04:40.681 sys 0m5.929s 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:04:40.681 ************************************ 01:04:40.681 END TEST nvmf_abort 01:04:40.681 ************************************ 01:04:40.681 11:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:04:40.681 ************************************ 01:04:40.681 START TEST nvmf_ns_hotplug_stress 01:04:40.681 ************************************ 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:04:40.681 * Looking for test storage... 
01:04:40.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:40.681 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 01:04:40.682 11:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 01:04:40.682 11:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:40.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:40.682 --rc genhtml_branch_coverage=1 01:04:40.682 --rc genhtml_function_coverage=1 01:04:40.682 --rc genhtml_legend=1 01:04:40.682 --rc geninfo_all_blocks=1 01:04:40.682 --rc geninfo_unexecuted_blocks=1 01:04:40.682 01:04:40.682 ' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:40.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:40.682 --rc genhtml_branch_coverage=1 01:04:40.682 --rc genhtml_function_coverage=1 01:04:40.682 --rc genhtml_legend=1 01:04:40.682 --rc geninfo_all_blocks=1 01:04:40.682 --rc geninfo_unexecuted_blocks=1 01:04:40.682 01:04:40.682 ' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:40.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:40.682 --rc genhtml_branch_coverage=1 01:04:40.682 --rc genhtml_function_coverage=1 01:04:40.682 --rc genhtml_legend=1 01:04:40.682 --rc geninfo_all_blocks=1 01:04:40.682 --rc geninfo_unexecuted_blocks=1 01:04:40.682 01:04:40.682 ' 01:04:40.682 11:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:40.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:40.682 --rc genhtml_branch_coverage=1 01:04:40.682 --rc genhtml_function_coverage=1 01:04:40.682 --rc genhtml_legend=1 01:04:40.682 --rc geninfo_all_blocks=1 01:04:40.682 --rc geninfo_unexecuted_blocks=1 01:04:40.682 01:04:40.682 ' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:40.682 11:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:40.682 
11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:40.682 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 01:04:40.683 11:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 01:04:47.259 
11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:04:47.259 11:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:04:47.259 Found 0000:af:00.0 (0x8086 - 0x159b) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:47.259 11:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:04:47.259 Found 0000:af:00.1 (0x8086 - 0x159b) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:47.259 
11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:04:47.259 Found net devices under 0000:af:00.0: cvl_0_0 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:04:47.259 Found net devices under 0000:af:00.1: cvl_0_1 01:04:47.259 
11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:04:47.259 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:04:47.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:47.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 01:04:47.260 01:04:47.260 --- 10.0.0.2 ping statistics --- 01:04:47.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.260 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:04:47.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:47.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 01:04:47.260 01:04:47.260 --- 10.0.0.1 ping statistics --- 01:04:47.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.260 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:47.260 11:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2566090 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2566090 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2566090 ']' 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:47.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:04:47.260 [2024-12-09 11:15:47.599413] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:04:47.260 [2024-12-09 11:15:47.600924] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:04:47.260 [2024-12-09 11:15:47.600982] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:47.260 [2024-12-09 11:15:47.706295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:47.260 [2024-12-09 11:15:47.747167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:47.260 [2024-12-09 11:15:47.747216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:47.260 [2024-12-09 11:15:47.747226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:47.260 [2024-12-09 11:15:47.747235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:47.260 [2024-12-09 11:15:47.747242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:04:47.260 [2024-12-09 11:15:47.748490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:04:47.260 [2024-12-09 11:15:47.748577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:04:47.260 [2024-12-09 11:15:47.748579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:47.260 [2024-12-09 11:15:47.818288] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:04:47.260 [2024-12-09 11:15:47.818304] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:04:47.260 [2024-12-09 11:15:47.818400] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:04:47.260 [2024-12-09 11:15:47.818519] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
01:04:47.260 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:04:47.260 [2024-12-09 11:15:48.065407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:47.260 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:04:47.260 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:04:47.520 [2024-12-09 11:15:48.525727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:04:47.520 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:04:47.779 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 01:04:47.779 Malloc0 01:04:47.779 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:04:48.039 Delay0 01:04:48.039 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:48.299 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 01:04:48.558 NULL1 01:04:48.558 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:04:48.817 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2566393 01:04:48.817 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:48.817 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:48.817 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 01:04:48.817 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:49.077 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 01:04:49.077 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 01:04:49.335 true 01:04:49.595 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:49.595 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:49.595 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:49.854 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 01:04:49.854 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 01:04:50.422 true 01:04:50.422 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:50.422 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:50.681 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:50.941 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 01:04:50.941 11:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 01:04:51.200 true 01:04:51.200 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:51.200 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:51.460 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:51.719 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 01:04:51.719 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 01:04:51.979 true 01:04:51.979 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:51.979 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:52.238 11:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:52.497 11:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 01:04:52.497 11:15:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 01:04:52.757 true 01:04:52.757 11:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:52.757 11:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:53.016 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:53.275 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 01:04:53.275 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 01:04:53.535 true 01:04:53.535 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:53.535 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:53.795 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:54.056 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
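The pattern repeating throughout this log (ns_hotplug_stress.sh @44–@50: check that the perf process is alive, remove namespace 1, re-add Delay0, bump null_size, resize NULL1) can be sketched as a self-contained loop. This is a hedged reconstruction, not the test script itself: `rpc()` below echoes the call instead of invoking the real `scripts/rpc.py`, and a fixed three-pass counter stands in for the `kill -0 "$PERF_PID"` liveness check that drives the real loop.

```shell
#!/bin/sh
# Hedged sketch of the hotplug stress loop seen in this log.
rpc() { echo "rpc.py $*"; }   # stub: echo instead of calling scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

passes=0
while [ "$passes" -lt 3 ]; do               # real condition: kill -0 "$PERF_PID"
    rpc nvmf_subsystem_remove_ns "$NQN" 1   # hot-unplug namespace 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0 # hot-plug it back
    null_size=$((null_size + 1))            # 1001, 1002, 1003, ... as logged
    rpc bdev_null_resize NULL1 "$null_size" # resize NULL1 under I/O load
    passes=$((passes + 1))
done
```

Resizing NULL1 while spdk_nvme_perf runs against the subsystem is what exercises the namespace-change notification path; the `true` after each resize in the log is the loop swallowing the RPC's exit status so a transient failure does not abort the stress run.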
01:04:54.056 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 01:04:54.318 true 01:04:54.318 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:54.318 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:54.577 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:54.837 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 01:04:54.837 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 01:04:55.096 true 01:04:55.096 11:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:55.096 11:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:55.355 11:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:55.614 11:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 01:04:55.614 11:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 01:04:55.872 true 01:04:55.873 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:55.873 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:56.440 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:56.700 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 01:04:56.700 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 01:04:56.959 true 01:04:56.959 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:56.959 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:57.218 11:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:57.480 11:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 01:04:57.480 11:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 01:04:57.739 true 01:04:57.739 11:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:57.739 11:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:57.999 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:58.568 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 01:04:58.568 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 01:04:58.568 true 01:04:58.827 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:58.827 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:59.086 11:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:04:59.345 11:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 01:04:59.345 11:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 01:04:59.604 true 01:04:59.604 11:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:04:59.604 11:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:04:59.862 11:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:00.121 11:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 01:05:00.121 11:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 01:05:00.380 true 01:05:00.380 11:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:00.380 11:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:00.639 11:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
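For reference, the provisioning sequence logged at the start of this run (ns_hotplug_stress.sh @27–@36) amounts to the RPC calls below. This is a sketch under assumptions: `rpc()` echoes rather than invoking the full `scripts/rpc.py` path from the log, and the per-flag comments reflect the documented rpc.py parameters, not anything asserted by the log itself.

```shell
#!/bin/sh
# Hedged reconstruction of the setup RPCs from the top of this log.
rpc() { echo "rpc.py $*"; }   # stub standing in for scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, options as logged
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                          # 32 MB bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0                           # becomes NSID 1
rpc bdev_null_create NULL1 1000 512                               # size in MB, 512 B blocks
rpc nvmf_subsystem_add_ns "$NQN" NULL1                            # becomes NSID 2
```

Layering Delay0 on Malloc0 gives the target a slow namespace, so remove/add operations race with in-flight I/O rather than completing instantly.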
01:05:00.897 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 01:05:00.897 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 01:05:01.156 true 01:05:01.415 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:01.415 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:01.415 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:01.674 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 01:05:01.674 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 01:05:01.933 true 01:05:01.933 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:01.933 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:02.193 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 01:05:02.763 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 01:05:02.763 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 01:05:02.763 true 01:05:03.022 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:03.022 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:03.281 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:03.542 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 01:05:03.542 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 01:05:03.542 true 01:05:03.801 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:03.801 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:03.801 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:04.060 11:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 01:05:04.060 11:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 01:05:04.629 true 01:05:04.629 11:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:04.629 11:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:04.888 11:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:04.888 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 01:05:04.888 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 01:05:05.155 true 01:05:05.156 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:05.156 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:05.414 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:05.673 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 01:05:05.673 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 01:05:05.933 true 01:05:06.191 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:06.191 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:06.191 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:06.759 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 01:05:06.759 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 01:05:06.759 true 01:05:06.759 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:06.760 11:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:07.019 11:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:07.587 11:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 01:05:07.587 11:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 01:05:07.587 true 01:05:07.846 11:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:07.846 11:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:08.106 11:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:08.366 11:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 01:05:08.366 11:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 01:05:08.625 true 01:05:08.625 11:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:08.625 11:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:08.900 11:16:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:09.160 11:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 01:05:09.160 11:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 01:05:09.419 true 01:05:09.419 11:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:09.419 11:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:09.678 11:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:09.937 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 01:05:09.937 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 01:05:10.196 true 01:05:10.196 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:10.196 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
01:05:10.455 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:11.024 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 01:05:11.024 11:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 01:05:11.024 true 01:05:11.024 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:11.024 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:11.284 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:11.543 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 01:05:11.543 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 01:05:11.802 true 01:05:11.802 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:11.802 11:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 01:05:12.062 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:12.321 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 01:05:12.321 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 01:05:12.890 true 01:05:12.890 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:12.890 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:13.150 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:13.409 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 01:05:13.409 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 01:05:13.669 true 01:05:13.669 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:13.669 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:13.929 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:14.187 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 01:05:14.187 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 01:05:14.447 true 01:05:14.447 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:14.447 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:14.706 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:15.275 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 01:05:15.275 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 01:05:15.275 true 01:05:15.535 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:15.535 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:15.794 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:16.054 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 01:05:16.054 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 01:05:16.314 true 01:05:16.314 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:16.314 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:16.573 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:16.833 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 01:05:16.833 11:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 01:05:17.092 true 01:05:17.351 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:17.351 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:17.610 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:17.869 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 01:05:17.869 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 01:05:18.128 true 01:05:18.128 11:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:18.128 11:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:18.387 11:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:18.645 11:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 01:05:18.645 11:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 01:05:18.904 true 01:05:18.904 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:18.904 11:16:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:19.164 Initializing NVMe Controllers 01:05:19.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:05:19.164 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 01:05:19.164 Controller IO queue size 128, less than required. 01:05:19.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:05:19.164 WARNING: Some requested NVMe devices were skipped 01:05:19.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:05:19.164 Initialization complete. Launching workers. 01:05:19.164 ======================================================== 01:05:19.164 Latency(us) 01:05:19.164 Device Information : IOPS MiB/s Average min max 01:05:19.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27933.83 13.64 4581.93 1858.01 9497.79 01:05:19.164 ======================================================== 01:05:19.164 Total : 27933.83 13.64 4581.93 1858.01 9497.79 01:05:19.164 01:05:19.164 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:19.424 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 01:05:19.424 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 01:05:19.694 true 01:05:19.694 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2566393 01:05:19.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2566393) - No such process 01:05:19.694 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2566393 01:05:19.694 11:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:19.953 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:05:20.522 null0 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:20.522 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
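The repeated `kill -0 2566393` probes in the log are the script's liveness check on the background workload: signal 0 delivers nothing, but the exit status reports whether the PID still exists, and the "No such process" line above marks the moment the workload exited and the loop stopped. A minimal sketch of that pattern, with a short `sleep` child standing in for the SPDK workload (the real script does its add/remove/resize work inside the loop body):

```shell
#!/bin/sh
# Background job standing in for the stress workload process.
sleep 1 &
pid=$!

# kill -0 sends no signal; its exit status only tells us whether $pid exists.
while kill -0 "$pid" 2>/dev/null; do
    # ns_hotplug_stress.sh performs its ns add/remove and bdev resize here.
    sleep 1
done

# Reap the child; after this, kill -0 on the PID fails.
wait "$pid"
echo "workload $pid has exited"
```

Once the probe fails, the script falls through to the `wait` / cleanup path seen at `@53`-`@55` above.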
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:05:20.781 null1 01:05:21.041 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:21.041 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:21.041 11:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:05:21.301 null2 01:05:21.301 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:21.301 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:21.301 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:05:21.560 null3 01:05:21.560 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:21.560 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:21.560 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:05:21.820 null4 01:05:21.820 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:21.820 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
01:05:21.820 11:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:05:22.079 null5 01:05:22.079 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:22.079 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:22.079 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:05:22.338 null6 01:05:22.338 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:22.338 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:22.338 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:05:22.598 null7 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:05:22.598 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2570725 2570726 2570728 2570730 2570732 2570734 2570736 2570738 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:22.599 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:22.858 11:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:23.118 11:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.118 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:23.378 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.637 11:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:23.637 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:23.896 11:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:23.896 11:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:23.896 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:24.154 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.155 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.413 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.671 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:24.930 11:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:24.930 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:24.930 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:24.930 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:24.930 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:24.930 11:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:25.189 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:25.447 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:25.706 11:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.706 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.965 11:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:25.965 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:25.965 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:25.965 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:25.965 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:25.965 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.224 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.224 11:16:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:05:26.483 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:05:26.742 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.001 11:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.001 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.002 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
01:05:27.261 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.521 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
01:05:27.780 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:27.780 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:27.780 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
01:05:27.781 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
01:05:27.781 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
01:05:27.781 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
01:05:27.781 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
01:05:27.781 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
01:05:28.040 11:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
01:05:28.040 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.040 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.041 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
01:05:28.300 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:05:28.301 rmmod nvme_tcp
01:05:28.301 rmmod nvme_fabrics
01:05:28.301 rmmod nvme_keyring
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2566090 ']'
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2566090
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2566090 ']'
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2566090
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566090
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566090'
01:05:28.301 killing process with pid 2566090
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2566090
01:05:28.301 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2566090
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:05:28.561 11:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
01:05:31.099
01:05:31.099 real 0m50.386s
01:05:31.099 user 3m17.016s
01:05:31.099 sys 0m29.109s
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
01:05:31.099 ************************************
01:05:31.099 END TEST nvmf_ns_hotplug_stress
01:05:31.099 ************************************
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
01:05:31.099 ************************************
01:05:31.099 START TEST nvmf_delete_subsystem
01:05:31.099 ************************************
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
01:05:31.099 * Looking for test storage...
01:05:31.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
01:05:31.099 11:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
01:05:31.099 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
01:05:31.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:05:31.100 --rc genhtml_branch_coverage=1
01:05:31.100 --rc genhtml_function_coverage=1
01:05:31.100 --rc genhtml_legend=1
01:05:31.100 --rc geninfo_all_blocks=1
01:05:31.100 --rc geninfo_unexecuted_blocks=1
01:05:31.100
01:05:31.100 '
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
01:05:31.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:05:31.100 --rc genhtml_branch_coverage=1
01:05:31.100 --rc genhtml_function_coverage=1
01:05:31.100 --rc genhtml_legend=1
01:05:31.100 --rc geninfo_all_blocks=1
01:05:31.100 --rc geninfo_unexecuted_blocks=1
01:05:31.100
01:05:31.100 '
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
01:05:31.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:05:31.100 --rc genhtml_branch_coverage=1
01:05:31.100 --rc genhtml_function_coverage=1
01:05:31.100 --rc genhtml_legend=1
01:05:31.100 --rc geninfo_all_blocks=1
01:05:31.100 --rc geninfo_unexecuted_blocks=1
01:05:31.100
01:05:31.100 '
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
01:05:31.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:05:31.100 --rc genhtml_branch_coverage=1
01:05:31.100 --rc genhtml_function_coverage=1
01:05:31.100 --rc genhtml_legend=1
01:05:31.100 --rc geninfo_all_blocks=1
01:05:31.100 --rc geninfo_unexecuted_blocks=1
01:05:31.100
01:05:31.100 '
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
01:05:31.100 11:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local
intel=0x8086 mellanox=0x15b3 pci net_dev 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:05:37.687 11:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:05:37.687 11:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:05:37.687 Found 0000:af:00.0 (0x8086 - 0x159b) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:05:37.687 Found 0000:af:00.1 (0x8086 - 0x159b) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:05:37.687 11:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:05:37.687 Found net devices under 0000:af:00.0: cvl_0_0 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:05:37.687 Found net devices under 0000:af:00.1: cvl_0_1 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 01:05:37.687 11:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:05:37.687 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:37.688 11:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:05:37.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:05:37.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 01:05:37.688 01:05:37.688 --- 10.0.0.2 ping statistics --- 01:05:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:37.688 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:05:37.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:05:37.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 01:05:37.688 01:05:37.688 --- 10.0.0.1 ping statistics --- 01:05:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:37.688 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:05:37.688 
11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2574841 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2574841 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2574841 ']' 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:37.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:37.688 11:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:37.688 [2024-12-09 11:16:38.846168] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:05:37.688 [2024-12-09 11:16:38.847669] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:05:37.688 [2024-12-09 11:16:38.847723] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:37.948 [2024-12-09 11:16:38.980412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:05:37.948 [2024-12-09 11:16:39.033027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:37.948 [2024-12-09 11:16:39.033077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:37.948 [2024-12-09 11:16:39.033093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:37.948 [2024-12-09 11:16:39.033108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:37.948 [2024-12-09 11:16:39.033119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:37.948 [2024-12-09 11:16:39.034473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:37.948 [2024-12-09 11:16:39.034486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:37.948 [2024-12-09 11:16:39.116086] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
01:05:37.948 [2024-12-09 11:16:39.116120] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:05:37.948 [2024-12-09 11:16:39.116371] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 [2024-12-09 11:16:39.198492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 [2024-12-09 11:16:39.218885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 NULL1 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 Delay0 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2574862 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:05:38.209 11:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:05:38.209 [2024-12-09 11:16:39.313661] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
01:05:40.123 11:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:05:40.123 11:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:40.123 11:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, sc=8) 01:05:40.383 starting I/O failed: -6 01:05:40.383 Read completed with error (sct=0, sc=8) 01:05:40.383 Write completed with error (sct=0, 
sc=8)
[... repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided; distinct errors from the abort flood follow ...]
01:05:40.383 [2024-12-09 11:16:41.448044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f789c00d680 is same with the state(6) to be set
01:05:40.384 [2024-12-09 11:16:41.448677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f2410 is same with the state(6) to be set
01:05:40.384 [2024-12-09 11:16:41.448934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f789c000c40 is same with the state(6) to be set
01:05:41.323 [2024-12-09 11:16:42.409534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3720 is same with the state(6) to be set
01:05:41.323 [2024-12-09 11:16:42.449374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f789c00d350 is same with the state(6) to be set
01:05:41.323 [2024-12-09 11:16:42.450922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3ae0 is same with the state(6) to be set
01:05:41.323 [2024-12-09 11:16:42.451561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3900 is same with the state(6) to be set
01:05:41.323 [2024-12-09 11:16:42.452463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f2740 is same with the state(6) to be set
01:05:41.323 Initializing NVMe Controllers
01:05:41.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
01:05:41.323 Controller IO queue size 128, less than required.
01:05:41.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:05:41.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
01:05:41.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
01:05:41.323 Initialization complete. Launching workers.
01:05:41.323 ========================================================
01:05:41.323 Latency(us)
01:05:41.323 Device Information : IOPS MiB/s Average min max
01:05:41.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.07 0.09 985834.72 1117.88 2002826.44
01:05:41.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.74 0.08 902932.42 516.83 2000602.74
01:05:41.323 ========================================================
01:05:41.323 Total : 328.81 0.16 947071.95 516.83 2002826.44
01:05:41.323
01:05:41.323 [2024-12-09 11:16:42.453216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3720 (9): Bad file descriptor
01:05:41.323 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:05:41.323 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
01:05:41.323 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2574862
01:05:41.323 11:16:42
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 01:05:41.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2574862 01:05:41.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2574862) - No such process 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2574862 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2574862 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2574862 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:41.901 [2024-12-09 11:16:42.986863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:05:41.901 11:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2575390 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:41.901 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:05:41.901 [2024-12-09 11:16:43.072341] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
01:05:42.468 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:05:42.468 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:42.468 11:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:05:43.036 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:05:43.036 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:43.036 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:05:43.605 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:05:43.605 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:43.605 11:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:05:43.864 11:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:05:43.864 11:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:43.864 11:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:05:44.437 11:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:05:44.437 11:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390 01:05:44.437 11:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
01:05:45.006 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
01:05:45.006 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390
01:05:45.006 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
01:05:45.265 Initializing NVMe Controllers
01:05:45.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
01:05:45.265 Controller IO queue size 128, less than required.
01:05:45.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:05:45.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
01:05:45.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
01:05:45.265 Initialization complete. Launching workers.
01:05:45.265 ========================================================
01:05:45.265 Latency(us)
01:05:45.265 Device Information : IOPS MiB/s Average min max
01:05:45.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004777.57 1000160.70 1013810.59
01:05:45.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003962.33 1000205.97 1041244.26
01:05:45.265 ========================================================
01:05:45.265 Total : 256.00 0.12 1004369.95 1000160.70 1041244.26
01:05:45.265
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2575390
01:05:45.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2575390) - No such process
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2575390
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:45.524 rmmod nvme_tcp 01:05:45.524 rmmod nvme_fabrics 01:05:45.524 rmmod nvme_keyring 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2574841 ']' 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2574841 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2574841 ']' 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2574841 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2574841 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:05:45.524 11:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2574841' 01:05:45.524 killing process with pid 2574841 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2574841 01:05:45.524 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2574841 01:05:45.784 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:05:45.784 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:45.784 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:45.784 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:46.044 11:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:46.044 11:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:47.952 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:05:47.952 01:05:47.952 real 0m17.242s 01:05:47.952 user 0m25.938s 01:05:47.952 sys 0m7.706s 01:05:47.952 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:47.952 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:05:47.952 ************************************ 01:05:47.952 END TEST nvmf_delete_subsystem 01:05:47.952 ************************************ 01:05:47.953 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:05:47.953 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:05:47.953 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:47.953 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:05:48.212 ************************************ 01:05:48.212 START TEST nvmf_host_management 01:05:48.212 ************************************ 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:05:48.212 * Looking for test storage... 
01:05:48.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:05:48.212 11:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:48.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:48.212 --rc genhtml_branch_coverage=1 01:05:48.212 --rc genhtml_function_coverage=1 01:05:48.212 --rc genhtml_legend=1 01:05:48.212 --rc geninfo_all_blocks=1 01:05:48.212 --rc geninfo_unexecuted_blocks=1 01:05:48.212 01:05:48.212 ' 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:48.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:48.212 --rc genhtml_branch_coverage=1 01:05:48.212 --rc genhtml_function_coverage=1 01:05:48.212 --rc genhtml_legend=1 01:05:48.212 --rc geninfo_all_blocks=1 01:05:48.212 --rc geninfo_unexecuted_blocks=1 01:05:48.212 01:05:48.212 ' 01:05:48.212 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:48.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:48.212 --rc genhtml_branch_coverage=1 01:05:48.212 --rc genhtml_function_coverage=1 01:05:48.212 --rc genhtml_legend=1 01:05:48.212 --rc geninfo_all_blocks=1 01:05:48.212 --rc geninfo_unexecuted_blocks=1 01:05:48.212 01:05:48.212 ' 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:48.213 
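The trace above walks SPDK's `cmp_versions` helper comparing the installed lcov version against 2 (`lt 1.15 2`): both versions are split on `.`/`-` into arrays and compared field by field. A simplified standalone sketch of that pattern (the function name and exact edge-case handling here are illustrative, not the real `scripts/common.sh` helper):

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version comparison traced above.
# Not the exact SPDK cmp_versions implementation.
ver_lt() {
    # Succeed (return 0) if $1 < $2, comparing dot/dash-separated numeric fields.
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"     # mirrors the 'lt 1.15 2' check in the log
ver_lt 2.1 2.1 || echo "2.1 == 2.1"
```

Comparing field by field (rather than lexically) is what makes `1.9.9 < 1.10` come out right, which is the whole point of the helper.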
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:48.213 --rc genhtml_branch_coverage=1 01:05:48.213 --rc genhtml_function_coverage=1 01:05:48.213 --rc genhtml_legend=1 01:05:48.213 --rc geninfo_all_blocks=1 01:05:48.213 --rc geninfo_unexecuted_blocks=1 01:05:48.213 01:05:48.213 ' 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:48.213 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:48.472 11:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:48.472 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.473 
11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 01:05:48.473 11:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 01:05:55.294 
11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:05:55.294 11:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:05:55.294 Found 0000:af:00.0 (0x8086 - 0x159b) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:05:55.294 11:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:05:55.294 Found 0000:af:00.1 (0x8086 - 0x159b) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:05:55.294 11:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:05:55.294 Found net devices under 0000:af:00.0: cvl_0_0 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:05:55.294 Found net devices under 0000:af:00.1: cvl_0_1 01:05:55.294 11:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:05:55.294 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:05:55.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:55.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 01:05:55.295 01:05:55.295 --- 10.0.0.2 ping statistics --- 01:05:55.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:55.295 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:05:55.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:05:55.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 01:05:55.295 01:05:55.295 --- 10.0.0.1 ping statistics --- 01:05:55.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:55.295 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2578991 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2578991 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2578991 ']' 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:55.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:55.295 11:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 [2024-12-09 11:16:56.003928] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:05:55.295 [2024-12-09 11:16:56.005449] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:05:55.295 [2024-12-09 11:16:56.005504] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:55.295 [2024-12-09 11:16:56.107557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:05:55.295 [2024-12-09 11:16:56.154565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:55.295 [2024-12-09 11:16:56.154610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:55.295 [2024-12-09 11:16:56.154620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:55.295 [2024-12-09 11:16:56.154629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:55.295 [2024-12-09 11:16:56.154637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:05:55.295 [2024-12-09 11:16:56.156211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:05:55.295 [2024-12-09 11:16:56.156301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:05:55.295 [2024-12-09 11:16:56.156395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:05:55.295 [2024-12-09 11:16:56.156397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:55.295 [2024-12-09 11:16:56.237584] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:05:55.295 [2024-12-09 11:16:56.237728] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:05:55.295 [2024-12-09 11:16:56.237873] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:05:55.295 [2024-12-09 11:16:56.238272] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:05:55.295 [2024-12-09 11:16:56.238455] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 [2024-12-09 11:16:56.324916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 11:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 Malloc0 01:05:55.295 [2024-12-09 11:16:56.397070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2579175 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2579175 /var/tmp/bdevperf.sock 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2579175 ']' 01:05:55.295 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:05:55.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:05:55.296 { 01:05:55.296 "params": { 01:05:55.296 "name": "Nvme$subsystem", 01:05:55.296 "trtype": "$TEST_TRANSPORT", 01:05:55.296 "traddr": "$NVMF_FIRST_TARGET_IP", 01:05:55.296 "adrfam": "ipv4", 01:05:55.296 "trsvcid": "$NVMF_PORT", 01:05:55.296 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 01:05:55.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:05:55.296 "hdgst": ${hdgst:-false}, 01:05:55.296 "ddgst": ${ddgst:-false} 01:05:55.296 }, 01:05:55.296 "method": "bdev_nvme_attach_controller" 01:05:55.296 } 01:05:55.296 EOF 01:05:55.296 )") 01:05:55.296 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:05:55.556 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:05:55.556 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:05:55.556 11:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:05:55.556 "params": { 01:05:55.556 "name": "Nvme0", 01:05:55.556 "trtype": "tcp", 01:05:55.556 "traddr": "10.0.0.2", 01:05:55.556 "adrfam": "ipv4", 01:05:55.556 "trsvcid": "4420", 01:05:55.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:55.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:55.556 "hdgst": false, 01:05:55.556 "ddgst": false 01:05:55.556 }, 01:05:55.556 "method": "bdev_nvme_attach_controller" 01:05:55.556 }' 01:05:55.556 [2024-12-09 11:16:56.515634] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:05:55.556 [2024-12-09 11:16:56.515720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579175 ] 01:05:55.556 [2024-12-09 11:16:56.642978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:55.556 [2024-12-09 11:16:56.694341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:56.125 Running I/O for 10 seconds... 
01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:05:56.125 11:16:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=88 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 88 -ge 100 ']' 01:05:56.125 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.386 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:56.386 [2024-12-09 11:16:57.454211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:56.386 [2024-12-09 11:16:57.454264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:56.386 [2024-12-09 11:16:57.454298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:56.386 [2024-12-09 11:16:57.454330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:56.386 [2024-12-09 11:16:57.454360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4a20 is same with the state(6) to be set 01:05:56.386 [2024-12-09 11:16:57.454676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 
[2024-12-09 11:16:57.454962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.386 [2024-12-09 11:16:57.454978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.386 [2024-12-09 11:16:57.454993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 
[2024-12-09 11:16:57.455671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:56.387 [2024-12-09 11:16:57.455827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:56.387 [2024-12-09 11:16:57.455841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:05:56.387 [2024-12-09 11:16:57.455858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:05:56.387 [2024-12-09 11:16:57.455875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 25 further identical WRITE command / ABORTED - SQ DELETION completion pairs elided (cid:38 through cid:62, lba:86784 through lba:89856, len:128 each) ...]
01:05:56.388 [2024-12-09 11:16:57.456688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:05:56.388 [2024-12-09 11:16:57.456704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:05:56.388 [2024-12-09 11:16:57.458066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:05:56.388 task offset: 81920 on job bdev=Nvme0n1 fails
01:05:56.388
01:05:56.388 Latency(us)
01:05:56.388 [2024-12-09T10:16:57.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:05:56.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:05:56.388 Job: Nvme0n1 ended in about 0.43 seconds with error
01:05:56.388 Verification LBA range: start 0x0 length 0x400
01:05:56.388 Nvme0n1 : 0.43 1478.00 92.38 147.80 0.00 37997.73 2934.87 34420.65
01:05:56.388 [2024-12-09T10:16:57.564Z] ===================================================================================================================
01:05:56.388 [2024-12-09T10:16:57.564Z] Total : 1478.00 92.38 147.80 0.00 37997.73 2934.87 34420.65
01:05:56.388 [2024-12-09 11:16:57.461364] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
01:05:56.388 [2024-12-09 11:16:57.461394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4a20 (9): Bad file descriptor
01:05:56.388 [2024-12-09 11:16:57.462519] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
01:05:56.388 [2024-12-09 11:16:57.462601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
01:05:56.388 [2024-12-09 11:16:57.462636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:05:56.388 [2024-12-09 11:16:57.462671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
01:05:56.388 [2024-12-09 11:16:57.462686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
01:05:56.388 [2024-12-09 11:16:57.462701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
01:05:56.388 [2024-12-09 11:16:57.462715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1be4a20
01:05:56.388 [2024-12-09 11:16:57.462744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4a20 (9): Bad file descriptor
01:05:56.388 [2024-12-09 11:16:57.462766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
01:05:56.388 [2024-12-09 11:16:57.462781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
01:05:56.388 [2024-12-09 11:16:57.462797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
01:05:56.388 [2024-12-09 11:16:57.462811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.388 11:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2579175 01:05:57.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2579175) - No such process 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:05:57.335 { 01:05:57.335 "params": { 01:05:57.335 "name": "Nvme$subsystem", 01:05:57.335 "trtype": "$TEST_TRANSPORT", 01:05:57.335 "traddr": "$NVMF_FIRST_TARGET_IP", 
01:05:57.335 "adrfam": "ipv4", 01:05:57.335 "trsvcid": "$NVMF_PORT", 01:05:57.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:05:57.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:05:57.335 "hdgst": ${hdgst:-false}, 01:05:57.335 "ddgst": ${ddgst:-false} 01:05:57.335 }, 01:05:57.335 "method": "bdev_nvme_attach_controller" 01:05:57.335 } 01:05:57.335 EOF 01:05:57.335 )") 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:05:57.335 11:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:05:57.335 "params": { 01:05:57.335 "name": "Nvme0", 01:05:57.335 "trtype": "tcp", 01:05:57.335 "traddr": "10.0.0.2", 01:05:57.335 "adrfam": "ipv4", 01:05:57.335 "trsvcid": "4420", 01:05:57.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:57.335 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:57.335 "hdgst": false, 01:05:57.335 "ddgst": false 01:05:57.335 }, 01:05:57.335 "method": "bdev_nvme_attach_controller" 01:05:57.335 }' 01:05:57.595 [2024-12-09 11:16:58.530509] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:05:57.595 [2024-12-09 11:16:58.530570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579403 ] 01:05:57.595 [2024-12-09 11:16:58.642148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:57.595 [2024-12-09 11:16:58.693340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:57.854 Running I/O for 1 seconds... 
01:05:58.812 1561.00 IOPS, 97.56 MiB/s
01:05:58.812 Latency(us)
01:05:58.813 [2024-12-09T10:16:59.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:05:58.813 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:05:58.813 Verification LBA range: start 0x0 length 0x400
01:05:58.813 Nvme0n1 : 1.01 1606.31 100.39 0.00 0.00 38875.94 2251.02 34420.65
01:05:58.813 [2024-12-09T10:16:59.989Z] ===================================================================================================================
01:05:58.813 [2024-12-09T10:16:59.989Z] Total : 1606.31 100.39 0.00 0.00 38875.94 2251.02 34420.65
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
01:05:59.074 11:17:00
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:59.074 rmmod nvme_tcp 01:05:59.074 rmmod nvme_fabrics 01:05:59.074 rmmod nvme_keyring 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2578991 ']' 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2578991 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2578991 ']' 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2578991 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:59.074 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578991 01:05:59.334 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:05:59.334 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:05:59.334 11:17:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578991' 01:05:59.334 killing process with pid 2578991 01:05:59.334 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2578991 01:05:59.334 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2578991 01:05:59.334 [2024-12-09 11:17:00.507916] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:59.595 11:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:01.503 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:06:01.503 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:06:01.503 01:06:01.503 real 0m13.494s 01:06:01.503 user 0m19.544s 01:06:01.503 sys 0m7.591s 01:06:01.503 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:01.503 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:06:01.503 ************************************ 01:06:01.503 END TEST nvmf_host_management 01:06:01.503 ************************************ 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:06:01.763 ************************************ 01:06:01.763 START TEST nvmf_lvol 01:06:01.763 ************************************ 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:06:01.763 * Looking for test storage... 
01:06:01.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:01.763 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:02.024 --rc genhtml_branch_coverage=1 01:06:02.024 --rc genhtml_function_coverage=1 01:06:02.024 --rc genhtml_legend=1 01:06:02.024 --rc geninfo_all_blocks=1 01:06:02.024 --rc geninfo_unexecuted_blocks=1 01:06:02.024 01:06:02.024 ' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:02.024 --rc genhtml_branch_coverage=1 01:06:02.024 --rc genhtml_function_coverage=1 01:06:02.024 --rc genhtml_legend=1 01:06:02.024 --rc geninfo_all_blocks=1 01:06:02.024 --rc geninfo_unexecuted_blocks=1 01:06:02.024 01:06:02.024 ' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:02.024 --rc genhtml_branch_coverage=1 01:06:02.024 --rc genhtml_function_coverage=1 01:06:02.024 --rc genhtml_legend=1 01:06:02.024 --rc geninfo_all_blocks=1 01:06:02.024 --rc geninfo_unexecuted_blocks=1 01:06:02.024 01:06:02.024 ' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:02.024 --rc genhtml_branch_coverage=1 01:06:02.024 --rc genhtml_function_coverage=1 01:06:02.024 --rc genhtml_legend=1 01:06:02.024 --rc geninfo_all_blocks=1 01:06:02.024 --rc geninfo_unexecuted_blocks=1 01:06:02.024 01:06:02.024 ' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:06:02.024 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:06:02.025 
11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 01:06:02.025 11:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 01:06:08.604 11:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:06:08.604 11:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:06:08.604 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:06:08.605 Found 0000:af:00.0 (0x8086 - 0x159b) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:06:08.605 Found 0000:af:00.1 (0x8086 - 0x159b) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:06:08.605 11:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:06:08.605 Found net devices under 0000:af:00.0: cvl_0_0 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:06:08.605 11:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:06:08.605 Found net devices under 0000:af:00.1: cvl_0_1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:06:08.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:08.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 01:06:08.605 01:06:08.605 --- 10.0.0.2 ping statistics --- 01:06:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:08.605 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:06:08.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:06:08.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 01:06:08.605 01:06:08.605 --- 10.0.0.1 ping statistics --- 01:06:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:08.605 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2582773 
01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2582773 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2582773 ']' 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:08.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:08.605 11:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 01:06:08.605 [2024-12-09 11:17:08.944768] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:06:08.605 [2024-12-09 11:17:08.946247] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:06:08.605 [2024-12-09 11:17:08.946298] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:08.605 [2024-12-09 11:17:09.078172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:06:08.605 [2024-12-09 11:17:09.129189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:08.606 [2024-12-09 11:17:09.129238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:08.606 [2024-12-09 11:17:09.129253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:08.606 [2024-12-09 11:17:09.129267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:08.606 [2024-12-09 11:17:09.129279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:08.606 [2024-12-09 11:17:09.130789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:08.606 [2024-12-09 11:17:09.130874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:06:08.606 [2024-12-09 11:17:09.130879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:08.606 [2024-12-09 11:17:09.209339] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:06:08.606 [2024-12-09 11:17:09.209476] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:06:08.606 [2024-12-09 11:17:09.209536] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
01:06:08.606 [2024-12-09 11:17:09.209765] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:06:08.606 [2024-12-09 11:17:09.547813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:08.606 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:06:08.866 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:06:08.866 11:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:06:09.126 11:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:06:09.126 11:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:06:09.386 11:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:06:09.646 11:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b4431497-dfb9-44b3-8673-efc103e0c692 01:06:09.646 11:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4431497-dfb9-44b3-8673-efc103e0c692 lvol 20 01:06:09.907 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=99fd0fcb-bcbe-4f7e-aef1-415affaf7860 01:06:09.907 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:06:10.167 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99fd0fcb-bcbe-4f7e-aef1-415affaf7860 01:06:10.427 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:06:10.687 [2024-12-09 11:17:11.683700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:10.687 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:10.946 
11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2583198 01:06:10.946 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:06:10.947 11:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:06:11.886 11:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 99fd0fcb-bcbe-4f7e-aef1-415affaf7860 MY_SNAPSHOT 01:06:12.146 11:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37ef47e2-092f-489e-8aa7-7e90c53c559d 01:06:12.146 11:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 99fd0fcb-bcbe-4f7e-aef1-415affaf7860 30 01:06:12.716 11:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37ef47e2-092f-489e-8aa7-7e90c53c559d MY_CLONE 01:06:12.976 11:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28e1749f-011a-4cb0-bd9f-a0100b0e3601 01:06:12.976 11:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28e1749f-011a-4cb0-bd9f-a0100b0e3601 01:06:13.547 11:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2583198 01:06:21.696 Initializing NVMe Controllers 01:06:21.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 01:06:21.696 
Controller IO queue size 128, less than required. 01:06:21.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:06:21.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:06:21.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:06:21.696 Initialization complete. Launching workers. 01:06:21.696 ======================================================== 01:06:21.696 Latency(us) 01:06:21.696 Device Information : IOPS MiB/s Average min max 01:06:21.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12701.10 49.61 10080.71 1091.94 86749.83 01:06:21.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8888.50 34.72 14405.41 5434.45 68704.74 01:06:21.696 ======================================================== 01:06:21.696 Total : 21589.60 84.33 11861.21 1091.94 86749.83 01:06:21.696 01:06:21.696 11:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:06:21.696 11:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99fd0fcb-bcbe-4f7e-aef1-415affaf7860 01:06:21.956 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b4431497-dfb9-44b3-8673-efc103e0c692 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:06:22.216 rmmod nvme_tcp 01:06:22.216 rmmod nvme_fabrics 01:06:22.216 rmmod nvme_keyring 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2582773 ']' 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2582773 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2582773 ']' 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2582773 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:22.216 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2582773 01:06:22.476 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:22.476 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:22.476 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582773' 01:06:22.476 killing process with pid 2582773 01:06:22.476 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2582773 01:06:22.476 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2582773 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:22.737 11:17:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:22.737 11:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:24.649 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:06:24.649 01:06:24.649 real 0m23.095s 01:06:24.649 user 0m57.020s 01:06:24.649 sys 0m11.500s 01:06:24.649 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:24.649 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:24.649 ************************************ 01:06:24.649 END TEST nvmf_lvol 01:06:24.649 ************************************ 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:06:24.910 ************************************ 01:06:24.910 START TEST nvmf_lvs_grow 01:06:24.910 ************************************ 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:06:24.910 * Looking for test storage... 
01:06:24.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 01:06:24.910 11:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:24.910 11:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:24.910 11:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:24.910 --rc genhtml_branch_coverage=1 01:06:24.910 --rc genhtml_function_coverage=1 01:06:24.910 --rc genhtml_legend=1 01:06:24.910 --rc geninfo_all_blocks=1 01:06:24.910 --rc geninfo_unexecuted_blocks=1 01:06:24.910 01:06:24.910 ' 01:06:24.910 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:24.910 --rc genhtml_branch_coverage=1 01:06:24.910 --rc genhtml_function_coverage=1 01:06:24.910 --rc genhtml_legend=1 01:06:24.910 --rc geninfo_all_blocks=1 01:06:24.910 --rc geninfo_unexecuted_blocks=1 01:06:24.910 01:06:24.910 ' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:24.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:24.911 --rc genhtml_branch_coverage=1 01:06:24.911 --rc genhtml_function_coverage=1 01:06:24.911 --rc genhtml_legend=1 01:06:24.911 --rc geninfo_all_blocks=1 01:06:24.911 --rc geninfo_unexecuted_blocks=1 01:06:24.911 01:06:24.911 ' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:24.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:24.911 --rc genhtml_branch_coverage=1 01:06:24.911 --rc genhtml_function_coverage=1 01:06:24.911 --rc genhtml_legend=1 01:06:24.911 --rc geninfo_all_blocks=1 01:06:24.911 --rc 
geninfo_unexecuted_blocks=1 01:06:24.911 01:06:24.911 ' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:06:24.911 11:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:24.911 11:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:24.911 11:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:06:24.911 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:06:25.171 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 01:06:25.172 11:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:31.747 
11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:06:31.747 11:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:06:31.747 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:06:31.748 11:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:06:31.748 Found 0000:af:00.0 (0x8086 - 0x159b) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:06:31.748 Found 0000:af:00.1 (0x8086 - 0x159b) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:06:31.748 Found net devices under 0000:af:00.0: cvl_0_0 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:06:31.748 11:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:06:31.748 Found net devices under 0000:af:00.1: cvl_0_1 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:31.748 
11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:06:31.748 11:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:06:31.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:31.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 01:06:31.748 01:06:31.748 --- 10.0.0.2 ping statistics --- 01:06:31.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:31.748 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:06:31.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:06:31.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 01:06:31.748 01:06:31.748 --- 10.0.0.1 ping statistics --- 01:06:31.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:31.748 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:31.748 11:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2587686 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2587686 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2587686 ']' 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:31.748 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:31.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:31.749 [2024-12-09 11:17:32.119890] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:06:31.749 [2024-12-09 11:17:32.121391] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:06:31.749 [2024-12-09 11:17:32.121443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:31.749 [2024-12-09 11:17:32.254108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:31.749 [2024-12-09 11:17:32.306565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:31.749 [2024-12-09 11:17:32.306612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:31.749 [2024-12-09 11:17:32.306628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:31.749 [2024-12-09 11:17:32.306643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:31.749 [2024-12-09 11:17:32.306660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:31.749 [2024-12-09 11:17:32.307264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:31.749 [2024-12-09 11:17:32.394724] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:06:31.749 [2024-12-09 11:17:32.395051] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:06:31.749 [2024-12-09 11:17:32.720079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:31.749 ************************************ 01:06:31.749 START TEST lvs_grow_clean 01:06:31.749 ************************************ 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:06:31.749 11:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:31.749 11:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:32.008 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:06:32.008 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:06:32.268 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e60c745-69be-4e03-8895-1623d9eee771 01:06:32.268 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:06:32.268 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:32.528 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:06:32.528 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:06:32.528 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e60c745-69be-4e03-8895-1623d9eee771 lvol 150 01:06:32.788 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e948511-f63c-4546-9f6a-9ce5768f574d 01:06:32.788 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:32.788 11:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:06:33.048 [2024-12-09 11:17:34.199826] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:06:33.048 [2024-12-09 11:17:34.199973] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:06:33.048 true 01:06:33.048 11:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:33.308 11:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:06:33.568 11:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:06:33.568 11:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:06:33.828 11:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e948511-f63c-4546-9f6a-9ce5768f574d 01:06:34.089 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:06:34.349 [2024-12-09 11:17:35.304317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:34.349 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2588252 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2588252 /var/tmp/bdevperf.sock 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2588252 ']' 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:06:34.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:34.610 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:06:34.610 [2024-12-09 11:17:35.657061] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:06:34.610 [2024-12-09 11:17:35.657141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588252 ] 01:06:34.610 [2024-12-09 11:17:35.754120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:34.870 [2024-12-09 11:17:35.800880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:34.870 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:34.870 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:06:34.870 11:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:06:35.439 Nvme0n1 01:06:35.439 11:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:06:35.700 [ 01:06:35.700 { 01:06:35.700 "name": "Nvme0n1", 01:06:35.700 "aliases": [ 01:06:35.700 "6e948511-f63c-4546-9f6a-9ce5768f574d" 01:06:35.700 ], 01:06:35.700 "product_name": "NVMe disk", 01:06:35.700 
"block_size": 4096, 01:06:35.700 "num_blocks": 38912, 01:06:35.700 "uuid": "6e948511-f63c-4546-9f6a-9ce5768f574d", 01:06:35.700 "numa_id": 1, 01:06:35.700 "assigned_rate_limits": { 01:06:35.700 "rw_ios_per_sec": 0, 01:06:35.700 "rw_mbytes_per_sec": 0, 01:06:35.700 "r_mbytes_per_sec": 0, 01:06:35.700 "w_mbytes_per_sec": 0 01:06:35.700 }, 01:06:35.700 "claimed": false, 01:06:35.700 "zoned": false, 01:06:35.700 "supported_io_types": { 01:06:35.700 "read": true, 01:06:35.700 "write": true, 01:06:35.700 "unmap": true, 01:06:35.700 "flush": true, 01:06:35.700 "reset": true, 01:06:35.700 "nvme_admin": true, 01:06:35.700 "nvme_io": true, 01:06:35.700 "nvme_io_md": false, 01:06:35.700 "write_zeroes": true, 01:06:35.700 "zcopy": false, 01:06:35.700 "get_zone_info": false, 01:06:35.700 "zone_management": false, 01:06:35.700 "zone_append": false, 01:06:35.700 "compare": true, 01:06:35.700 "compare_and_write": true, 01:06:35.700 "abort": true, 01:06:35.700 "seek_hole": false, 01:06:35.700 "seek_data": false, 01:06:35.700 "copy": true, 01:06:35.700 "nvme_iov_md": false 01:06:35.700 }, 01:06:35.700 "memory_domains": [ 01:06:35.700 { 01:06:35.700 "dma_device_id": "system", 01:06:35.700 "dma_device_type": 1 01:06:35.700 } 01:06:35.700 ], 01:06:35.700 "driver_specific": { 01:06:35.700 "nvme": [ 01:06:35.700 { 01:06:35.700 "trid": { 01:06:35.700 "trtype": "TCP", 01:06:35.700 "adrfam": "IPv4", 01:06:35.700 "traddr": "10.0.0.2", 01:06:35.700 "trsvcid": "4420", 01:06:35.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:06:35.700 }, 01:06:35.700 "ctrlr_data": { 01:06:35.700 "cntlid": 1, 01:06:35.700 "vendor_id": "0x8086", 01:06:35.700 "model_number": "SPDK bdev Controller", 01:06:35.700 "serial_number": "SPDK0", 01:06:35.700 "firmware_revision": "25.01", 01:06:35.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:06:35.700 "oacs": { 01:06:35.700 "security": 0, 01:06:35.700 "format": 0, 01:06:35.700 "firmware": 0, 01:06:35.700 "ns_manage": 0 01:06:35.700 }, 01:06:35.700 "multi_ctrlr": true, 
01:06:35.700 "ana_reporting": false 01:06:35.700 }, 01:06:35.700 "vs": { 01:06:35.700 "nvme_version": "1.3" 01:06:35.700 }, 01:06:35.700 "ns_data": { 01:06:35.700 "id": 1, 01:06:35.700 "can_share": true 01:06:35.700 } 01:06:35.700 } 01:06:35.700 ], 01:06:35.700 "mp_policy": "active_passive" 01:06:35.700 } 01:06:35.700 } 01:06:35.700 ] 01:06:35.700 11:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2588352 01:06:35.700 11:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:06:35.700 11:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:06:35.700 Running I/O for 10 seconds... 01:06:37.079 Latency(us) 01:06:37.079 [2024-12-09T10:17:38.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:37.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:37.079 Nvme0n1 : 1.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 01:06:37.079 [2024-12-09T10:17:38.255Z] =================================================================================================================== 01:06:37.079 [2024-12-09T10:17:38.255Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 01:06:37.079 01:06:37.649 11:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:37.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:37.649 Nvme0n1 : 2.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 01:06:37.649 [2024-12-09T10:17:38.825Z] 
=================================================================================================================== 01:06:37.649 [2024-12-09T10:17:38.825Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 01:06:37.649 01:06:37.908 true 01:06:37.908 11:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:37.908 11:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:06:38.168 11:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:06:38.168 11:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:06:38.168 11:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2588352 01:06:38.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:38.752 Nvme0n1 : 3.00 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 01:06:38.752 [2024-12-09T10:17:39.928Z] =================================================================================================================== 01:06:38.752 [2024-12-09T10:17:39.928Z] Total : 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 01:06:38.752 01:06:39.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:39.688 Nvme0n1 : 4.00 15462.25 60.40 0.00 0.00 0.00 0.00 0.00 01:06:39.688 [2024-12-09T10:17:40.864Z] =================================================================================================================== 01:06:39.688 [2024-12-09T10:17:40.864Z] Total : 15462.25 60.40 0.00 0.00 0.00 0.00 0.00 01:06:39.688 01:06:41.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 01:06:41.065 Nvme0n1 : 5.00 15519.40 60.62 0.00 0.00 0.00 0.00 0.00 01:06:41.065 [2024-12-09T10:17:42.241Z] =================================================================================================================== 01:06:41.065 [2024-12-09T10:17:42.241Z] Total : 15519.40 60.62 0.00 0.00 0.00 0.00 0.00 01:06:41.065 01:06:42.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:42.003 Nvme0n1 : 6.00 15536.33 60.69 0.00 0.00 0.00 0.00 0.00 01:06:42.003 [2024-12-09T10:17:43.179Z] =================================================================================================================== 01:06:42.003 [2024-12-09T10:17:43.179Z] Total : 15536.33 60.69 0.00 0.00 0.00 0.00 0.00 01:06:42.003 01:06:42.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:42.941 Nvme0n1 : 7.00 15566.57 60.81 0.00 0.00 0.00 0.00 0.00 01:06:42.941 [2024-12-09T10:17:44.117Z] =================================================================================================================== 01:06:42.941 [2024-12-09T10:17:44.117Z] Total : 15566.57 60.81 0.00 0.00 0.00 0.00 0.00 01:06:42.941 01:06:43.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:43.878 Nvme0n1 : 8.00 15573.38 60.83 0.00 0.00 0.00 0.00 0.00 01:06:43.878 [2024-12-09T10:17:45.054Z] =================================================================================================================== 01:06:43.878 [2024-12-09T10:17:45.054Z] Total : 15573.38 60.83 0.00 0.00 0.00 0.00 0.00 01:06:43.878 01:06:44.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:44.813 Nvme0n1 : 9.00 15592.78 60.91 0.00 0.00 0.00 0.00 0.00 01:06:44.813 [2024-12-09T10:17:45.989Z] =================================================================================================================== 01:06:44.813 [2024-12-09T10:17:45.989Z] Total : 15592.78 60.91 0.00 0.00 0.00 0.00 0.00 01:06:44.813 
01:06:45.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:45.750 Nvme0n1 : 10.00 15602.00 60.95 0.00 0.00 0.00 0.00 0.00 01:06:45.750 [2024-12-09T10:17:46.926Z] =================================================================================================================== 01:06:45.750 [2024-12-09T10:17:46.926Z] Total : 15602.00 60.95 0.00 0.00 0.00 0.00 0.00 01:06:45.750 01:06:45.750 01:06:45.750 Latency(us) 01:06:45.750 [2024-12-09T10:17:46.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:45.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:45.750 Nvme0n1 : 10.01 15607.17 60.97 0.00 0.00 8197.96 6781.55 25530.55 01:06:45.750 [2024-12-09T10:17:46.926Z] =================================================================================================================== 01:06:45.750 [2024-12-09T10:17:46.926Z] Total : 15607.17 60.97 0.00 0.00 8197.96 6781.55 25530.55 01:06:45.750 { 01:06:45.750 "results": [ 01:06:45.750 { 01:06:45.750 "job": "Nvme0n1", 01:06:45.750 "core_mask": "0x2", 01:06:45.750 "workload": "randwrite", 01:06:45.750 "status": "finished", 01:06:45.750 "queue_depth": 128, 01:06:45.750 "io_size": 4096, 01:06:45.751 "runtime": 10.008923, 01:06:45.751 "iops": 15607.173718890634, 01:06:45.751 "mibps": 60.96552233941654, 01:06:45.751 "io_failed": 0, 01:06:45.751 "io_timeout": 0, 01:06:45.751 "avg_latency_us": 8197.95557458098, 01:06:45.751 "min_latency_us": 6781.551304347826, 01:06:45.751 "max_latency_us": 25530.54608695652 01:06:45.751 } 01:06:45.751 ], 01:06:45.751 "core_count": 1 01:06:45.751 } 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2588252 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2588252 ']' 01:06:45.751 11:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2588252 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2588252 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2588252' 01:06:45.751 killing process with pid 2588252 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2588252 01:06:45.751 Received shutdown signal, test time was about 10.000000 seconds 01:06:45.751 01:06:45.751 Latency(us) 01:06:45.751 [2024-12-09T10:17:46.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:45.751 [2024-12-09T10:17:46.927Z] =================================================================================================================== 01:06:45.751 [2024-12-09T10:17:46.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:45.751 11:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2588252 01:06:46.010 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:46.268 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:06:46.526 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:46.526 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:06:47.096 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:06:47.096 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:06:47.096 11:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:47.096 [2024-12-09 11:17:48.227888] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:06:47.096 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:47.096 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 01:06:47.356 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:47.616 request: 01:06:47.616 { 01:06:47.616 "uuid": "8e60c745-69be-4e03-8895-1623d9eee771", 01:06:47.616 "method": 
"bdev_lvol_get_lvstores", 01:06:47.616 "req_id": 1 01:06:47.616 } 01:06:47.616 Got JSON-RPC error response 01:06:47.616 response: 01:06:47.616 { 01:06:47.616 "code": -19, 01:06:47.616 "message": "No such device" 01:06:47.616 } 01:06:47.616 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:06:47.616 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:47.616 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:47.616 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:47.616 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:47.876 aio_bdev 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e948511-f63c-4546-9f6a-9ce5768f574d 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6e948511-f63c-4546-9f6a-9ce5768f574d 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 01:06:47.876 11:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 01:06:48.135 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e948511-f63c-4546-9f6a-9ce5768f574d -t 2000 01:06:48.395 [ 01:06:48.395 { 01:06:48.395 "name": "6e948511-f63c-4546-9f6a-9ce5768f574d", 01:06:48.395 "aliases": [ 01:06:48.395 "lvs/lvol" 01:06:48.395 ], 01:06:48.395 "product_name": "Logical Volume", 01:06:48.395 "block_size": 4096, 01:06:48.395 "num_blocks": 38912, 01:06:48.395 "uuid": "6e948511-f63c-4546-9f6a-9ce5768f574d", 01:06:48.395 "assigned_rate_limits": { 01:06:48.395 "rw_ios_per_sec": 0, 01:06:48.395 "rw_mbytes_per_sec": 0, 01:06:48.395 "r_mbytes_per_sec": 0, 01:06:48.395 "w_mbytes_per_sec": 0 01:06:48.395 }, 01:06:48.395 "claimed": false, 01:06:48.395 "zoned": false, 01:06:48.395 "supported_io_types": { 01:06:48.395 "read": true, 01:06:48.395 "write": true, 01:06:48.395 "unmap": true, 01:06:48.395 "flush": false, 01:06:48.395 "reset": true, 01:06:48.395 "nvme_admin": false, 01:06:48.395 "nvme_io": false, 01:06:48.395 "nvme_io_md": false, 01:06:48.395 "write_zeroes": true, 01:06:48.395 "zcopy": false, 01:06:48.395 "get_zone_info": false, 01:06:48.395 "zone_management": false, 01:06:48.395 "zone_append": false, 01:06:48.395 "compare": false, 01:06:48.395 "compare_and_write": false, 01:06:48.395 "abort": false, 01:06:48.395 "seek_hole": true, 01:06:48.395 "seek_data": true, 01:06:48.395 "copy": false, 01:06:48.395 "nvme_iov_md": false 01:06:48.395 }, 01:06:48.395 "driver_specific": { 01:06:48.395 "lvol": { 01:06:48.395 "lvol_store_uuid": "8e60c745-69be-4e03-8895-1623d9eee771", 01:06:48.395 "base_bdev": "aio_bdev", 01:06:48.395 
"thin_provision": false, 01:06:48.395 "num_allocated_clusters": 38, 01:06:48.395 "snapshot": false, 01:06:48.395 "clone": false, 01:06:48.395 "esnap_clone": false 01:06:48.395 } 01:06:48.395 } 01:06:48.396 } 01:06:48.396 ] 01:06:48.396 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:06:48.396 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:06:48.396 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:48.655 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:06:48.655 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e60c745-69be-4e03-8895-1623d9eee771 01:06:48.655 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:06:48.915 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:06:48.915 11:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e948511-f63c-4546-9f6a-9ce5768f574d 01:06:49.174 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e60c745-69be-4e03-8895-1623d9eee771 
01:06:49.434 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:49.693 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:49.953 01:06:49.953 real 0m18.098s 01:06:49.953 user 0m17.254s 01:06:49.953 sys 0m2.307s 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:06:49.953 ************************************ 01:06:49.953 END TEST lvs_grow_clean 01:06:49.953 ************************************ 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:49.953 ************************************ 01:06:49.953 START TEST lvs_grow_dirty 01:06:49.953 ************************************ 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:06:49.953 11:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:49.953 11:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:50.212 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:06:50.212 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:06:50.471 11:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2800374-591d-4c8f-add8-04e056d9afac 01:06:50.471 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:06:50.471 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:06:50.731 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:06:50.731 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:06:50.731 11:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2800374-591d-4c8f-add8-04e056d9afac lvol 150 01:06:50.990 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:06:50.990 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:06:50.990 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:06:51.324 [2024-12-09 11:17:52.399813] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:06:51.324 [2024-12-09 
11:17:52.399952] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:06:51.324 true 01:06:51.324 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:06:51.324 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:06:51.629 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:06:51.629 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:06:51.915 11:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:06:52.174 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:06:52.433 [2024-12-09 11:17:53.532290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:52.433 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2590610 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2590610 /var/tmp/bdevperf.sock 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2590610 ']' 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:06:52.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:52.693 11:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:52.953 [2024-12-09 11:17:53.890515] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:06:52.953 [2024-12-09 11:17:53.890594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590610 ] 01:06:52.953 [2024-12-09 11:17:53.987326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:52.953 [2024-12-09 11:17:54.033179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:53.212 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:53.212 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:06:53.212 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:06:53.471 Nvme0n1 01:06:53.471 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:06:53.731 [ 01:06:53.731 { 01:06:53.731 "name": "Nvme0n1", 01:06:53.731 "aliases": [ 01:06:53.731 "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b" 01:06:53.731 ], 01:06:53.731 "product_name": "NVMe disk", 01:06:53.731 "block_size": 4096, 01:06:53.731 "num_blocks": 38912, 01:06:53.731 "uuid": "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b", 01:06:53.731 "numa_id": 1, 01:06:53.731 "assigned_rate_limits": { 01:06:53.731 "rw_ios_per_sec": 0, 01:06:53.731 "rw_mbytes_per_sec": 0, 01:06:53.731 "r_mbytes_per_sec": 0, 01:06:53.731 "w_mbytes_per_sec": 0 01:06:53.731 }, 01:06:53.731 "claimed": false, 01:06:53.731 "zoned": false, 
01:06:53.731 "supported_io_types": { 01:06:53.731 "read": true, 01:06:53.731 "write": true, 01:06:53.731 "unmap": true, 01:06:53.731 "flush": true, 01:06:53.731 "reset": true, 01:06:53.731 "nvme_admin": true, 01:06:53.731 "nvme_io": true, 01:06:53.731 "nvme_io_md": false, 01:06:53.731 "write_zeroes": true, 01:06:53.731 "zcopy": false, 01:06:53.731 "get_zone_info": false, 01:06:53.731 "zone_management": false, 01:06:53.731 "zone_append": false, 01:06:53.731 "compare": true, 01:06:53.731 "compare_and_write": true, 01:06:53.731 "abort": true, 01:06:53.731 "seek_hole": false, 01:06:53.731 "seek_data": false, 01:06:53.731 "copy": true, 01:06:53.731 "nvme_iov_md": false 01:06:53.731 }, 01:06:53.731 "memory_domains": [ 01:06:53.731 { 01:06:53.731 "dma_device_id": "system", 01:06:53.731 "dma_device_type": 1 01:06:53.731 } 01:06:53.731 ], 01:06:53.731 "driver_specific": { 01:06:53.731 "nvme": [ 01:06:53.731 { 01:06:53.731 "trid": { 01:06:53.731 "trtype": "TCP", 01:06:53.731 "adrfam": "IPv4", 01:06:53.731 "traddr": "10.0.0.2", 01:06:53.731 "trsvcid": "4420", 01:06:53.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:06:53.731 }, 01:06:53.731 "ctrlr_data": { 01:06:53.731 "cntlid": 1, 01:06:53.731 "vendor_id": "0x8086", 01:06:53.731 "model_number": "SPDK bdev Controller", 01:06:53.731 "serial_number": "SPDK0", 01:06:53.731 "firmware_revision": "25.01", 01:06:53.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:06:53.731 "oacs": { 01:06:53.731 "security": 0, 01:06:53.731 "format": 0, 01:06:53.731 "firmware": 0, 01:06:53.731 "ns_manage": 0 01:06:53.731 }, 01:06:53.731 "multi_ctrlr": true, 01:06:53.731 "ana_reporting": false 01:06:53.731 }, 01:06:53.731 "vs": { 01:06:53.731 "nvme_version": "1.3" 01:06:53.731 }, 01:06:53.731 "ns_data": { 01:06:53.731 "id": 1, 01:06:53.731 "can_share": true 01:06:53.731 } 01:06:53.731 } 01:06:53.731 ], 01:06:53.731 "mp_policy": "active_passive" 01:06:53.731 } 01:06:53.731 } 01:06:53.731 ] 01:06:53.731 11:17:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2590630 01:06:53.731 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:06:53.731 11:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:06:53.731 Running I/O for 10 seconds... 01:06:55.110 Latency(us) 01:06:55.110 [2024-12-09T10:17:56.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:55.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:55.110 Nvme0n1 : 1.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 01:06:55.110 [2024-12-09T10:17:56.286Z] =================================================================================================================== 01:06:55.110 [2024-12-09T10:17:56.286Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 01:06:55.110 01:06:55.680 11:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2800374-591d-4c8f-add8-04e056d9afac 01:06:55.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:55.940 Nvme0n1 : 2.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 01:06:55.940 [2024-12-09T10:17:57.116Z] =================================================================================================================== 01:06:55.940 [2024-12-09T10:17:57.116Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 01:06:55.940 01:06:55.940 true 01:06:55.940 11:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:06:55.940 11:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:06:56.199 11:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:06:56.199 11:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:06:56.199 11:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2590630 01:06:56.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:56.768 Nvme0n1 : 3.00 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 01:06:56.768 [2024-12-09T10:17:57.944Z] =================================================================================================================== 01:06:56.768 [2024-12-09T10:17:57.944Z] Total : 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 01:06:56.768 01:06:58.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:58.149 Nvme0n1 : 4.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 01:06:58.149 [2024-12-09T10:17:59.325Z] =================================================================================================================== 01:06:58.149 [2024-12-09T10:17:59.325Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 01:06:58.149 01:06:59.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:59.087 Nvme0n1 : 5.00 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 01:06:59.087 [2024-12-09T10:18:00.263Z] =================================================================================================================== 01:06:59.087 [2024-12-09T10:18:00.263Z] Total : 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 01:06:59.087 01:07:00.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
01:07:00.024 Nvme0n1 : 6.00 15528.67 60.66 0.00 0.00 0.00 0.00 0.00 01:07:00.024 [2024-12-09T10:18:01.200Z] =================================================================================================================== 01:07:00.024 [2024-12-09T10:18:01.200Z] Total : 15528.67 60.66 0.00 0.00 0.00 0.00 0.00 01:07:00.024 01:07:00.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:00.959 Nvme0n1 : 7.00 15548.43 60.74 0.00 0.00 0.00 0.00 0.00 01:07:00.959 [2024-12-09T10:18:02.135Z] =================================================================================================================== 01:07:00.959 [2024-12-09T10:18:02.135Z] Total : 15548.43 60.74 0.00 0.00 0.00 0.00 0.00 01:07:00.959 01:07:01.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:01.894 Nvme0n1 : 8.00 15565.50 60.80 0.00 0.00 0.00 0.00 0.00 01:07:01.894 [2024-12-09T10:18:03.070Z] =================================================================================================================== 01:07:01.894 [2024-12-09T10:18:03.070Z] Total : 15565.50 60.80 0.00 0.00 0.00 0.00 0.00 01:07:01.894 01:07:02.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:02.831 Nvme0n1 : 9.00 15592.78 60.91 0.00 0.00 0.00 0.00 0.00 01:07:02.831 [2024-12-09T10:18:04.007Z] =================================================================================================================== 01:07:02.831 [2024-12-09T10:18:04.007Z] Total : 15592.78 60.91 0.00 0.00 0.00 0.00 0.00 01:07:02.831 01:07:03.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:03.769 Nvme0n1 : 10.00 15595.60 60.92 0.00 0.00 0.00 0.00 0.00 01:07:03.769 [2024-12-09T10:18:04.945Z] =================================================================================================================== 01:07:03.769 [2024-12-09T10:18:04.945Z] Total : 15595.60 60.92 0.00 0.00 0.00 0.00 0.00 01:07:03.769 01:07:03.769 
01:07:03.769 Latency(us) 01:07:03.769 [2024-12-09T10:18:04.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:03.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:07:03.769 Nvme0n1 : 10.00 15601.52 60.94 0.00 0.00 8200.90 6582.09 25416.57 01:07:03.769 [2024-12-09T10:18:04.945Z] =================================================================================================================== 01:07:03.769 [2024-12-09T10:18:04.945Z] Total : 15601.52 60.94 0.00 0.00 8200.90 6582.09 25416.57 01:07:03.769 { 01:07:03.769 "results": [ 01:07:03.769 { 01:07:03.769 "job": "Nvme0n1", 01:07:03.769 "core_mask": "0x2", 01:07:03.769 "workload": "randwrite", 01:07:03.769 "status": "finished", 01:07:03.769 "queue_depth": 128, 01:07:03.769 "io_size": 4096, 01:07:03.769 "runtime": 10.004407, 01:07:03.769 "iops": 15601.524408193309, 01:07:03.769 "mibps": 60.94345471950511, 01:07:03.769 "io_failed": 0, 01:07:03.769 "io_timeout": 0, 01:07:03.769 "avg_latency_us": 8200.900103422571, 01:07:03.769 "min_latency_us": 6582.093913043478, 01:07:03.769 "max_latency_us": 25416.57043478261 01:07:03.769 } 01:07:03.769 ], 01:07:03.769 "core_count": 1 01:07:03.769 } 01:07:03.769 11:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2590610 01:07:03.769 11:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2590610 ']' 01:07:03.769 11:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2590610 01:07:03.769 11:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:07:04.036 11:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:04.036 11:18:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590610 01:07:04.036 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:04.037 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:04.037 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590610' 01:07:04.037 killing process with pid 2590610 01:07:04.037 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2590610 01:07:04.037 Received shutdown signal, test time was about 10.000000 seconds 01:07:04.037 01:07:04.037 Latency(us) 01:07:04.037 [2024-12-09T10:18:05.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:04.037 [2024-12-09T10:18:05.213Z] =================================================================================================================== 01:07:04.037 [2024-12-09T10:18:05.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:04.037 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2590610 01:07:04.300 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:07:04.300 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:07:04.558 11:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:04.559 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:07:05.126 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:07:05.126 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:07:05.127 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2587686 01:07:05.127 11:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2587686 01:07:05.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2587686 Killed "${NVMF_APP[@]}" "$@" 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2592182 01:07:05.127 11:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2592182 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2592182 ']' 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:05.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:05.127 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:07:05.127 [2024-12-09 11:18:06.125271] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:07:05.127 [2024-12-09 11:18:06.126804] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:07:05.127 [2024-12-09 11:18:06.126860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:05.127 [2024-12-09 11:18:06.261896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:05.386 [2024-12-09 11:18:06.317155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:05.386 [2024-12-09 11:18:06.317198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:05.386 [2024-12-09 11:18:06.317214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:05.386 [2024-12-09 11:18:06.317227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:05.386 [2024-12-09 11:18:06.317239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:05.386 [2024-12-09 11:18:06.317842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:05.386 [2024-12-09 11:18:06.405692] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:07:05.386 [2024-12-09 11:18:06.406001] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:05.386 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:07:05.645 [2024-12-09 11:18:06.664518] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 01:07:05.645 [2024-12-09 11:18:06.664771] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:07:05.645 [2024-12-09 11:18:06.664871] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:07:05.645 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 01:07:05.904 11:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ee11e1a4-0450-42a3-82f2-72b8a8c23e7b -t 2000 01:07:06.163 [ 01:07:06.163 { 01:07:06.163 "name": "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b", 01:07:06.163 "aliases": [ 01:07:06.163 "lvs/lvol" 01:07:06.163 ], 01:07:06.163 "product_name": "Logical Volume", 01:07:06.163 "block_size": 4096, 01:07:06.163 "num_blocks": 38912, 01:07:06.163 "uuid": "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b", 01:07:06.163 "assigned_rate_limits": { 01:07:06.163 "rw_ios_per_sec": 0, 01:07:06.163 "rw_mbytes_per_sec": 0, 01:07:06.163 "r_mbytes_per_sec": 0, 01:07:06.163 "w_mbytes_per_sec": 0 01:07:06.163 }, 01:07:06.163 "claimed": false, 01:07:06.163 "zoned": false, 01:07:06.163 "supported_io_types": { 01:07:06.163 "read": true, 01:07:06.163 "write": true, 01:07:06.163 "unmap": true, 01:07:06.163 "flush": false, 01:07:06.163 "reset": true, 01:07:06.163 "nvme_admin": false, 01:07:06.163 "nvme_io": false, 01:07:06.163 "nvme_io_md": false, 01:07:06.163 "write_zeroes": true, 
01:07:06.163 "zcopy": false, 01:07:06.163 "get_zone_info": false, 01:07:06.163 "zone_management": false, 01:07:06.163 "zone_append": false, 01:07:06.163 "compare": false, 01:07:06.163 "compare_and_write": false, 01:07:06.163 "abort": false, 01:07:06.163 "seek_hole": true, 01:07:06.163 "seek_data": true, 01:07:06.163 "copy": false, 01:07:06.163 "nvme_iov_md": false 01:07:06.163 }, 01:07:06.163 "driver_specific": { 01:07:06.163 "lvol": { 01:07:06.163 "lvol_store_uuid": "e2800374-591d-4c8f-add8-04e056d9afac", 01:07:06.163 "base_bdev": "aio_bdev", 01:07:06.163 "thin_provision": false, 01:07:06.163 "num_allocated_clusters": 38, 01:07:06.163 "snapshot": false, 01:07:06.163 "clone": false, 01:07:06.163 "esnap_clone": false 01:07:06.163 } 01:07:06.163 } 01:07:06.163 } 01:07:06.163 ] 01:07:06.163 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:07:06.163 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:06.163 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:07:06.422 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:07:06.422 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:06.422 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:07:06.681 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:07:06.681 11:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:07:06.941 [2024-12-09 11:18:07.990477] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 01:07:06.941 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:07.201 request: 01:07:07.201 { 01:07:07.201 "uuid": "e2800374-591d-4c8f-add8-04e056d9afac", 01:07:07.201 "method": "bdev_lvol_get_lvstores", 01:07:07.201 "req_id": 1 01:07:07.201 } 01:07:07.201 Got JSON-RPC error response 01:07:07.201 response: 01:07:07.201 { 01:07:07.201 "code": -19, 01:07:07.201 "message": "No such device" 01:07:07.201 } 01:07:07.201 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:07:07.201 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:07.201 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:07.201 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:07.201 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:07:07.460 aio_bdev 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:07:07.460 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 01:07:08.028 11:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ee11e1a4-0450-42a3-82f2-72b8a8c23e7b -t 2000 01:07:08.029 [ 01:07:08.029 { 01:07:08.029 "name": "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b", 01:07:08.029 "aliases": [ 01:07:08.029 "lvs/lvol" 01:07:08.029 ], 01:07:08.029 "product_name": "Logical Volume", 01:07:08.029 "block_size": 4096, 01:07:08.029 "num_blocks": 38912, 01:07:08.029 "uuid": "ee11e1a4-0450-42a3-82f2-72b8a8c23e7b", 01:07:08.029 "assigned_rate_limits": { 01:07:08.029 "rw_ios_per_sec": 0, 01:07:08.029 "rw_mbytes_per_sec": 0, 01:07:08.029 
"r_mbytes_per_sec": 0, 01:07:08.029 "w_mbytes_per_sec": 0 01:07:08.029 }, 01:07:08.029 "claimed": false, 01:07:08.029 "zoned": false, 01:07:08.029 "supported_io_types": { 01:07:08.029 "read": true, 01:07:08.029 "write": true, 01:07:08.029 "unmap": true, 01:07:08.029 "flush": false, 01:07:08.029 "reset": true, 01:07:08.029 "nvme_admin": false, 01:07:08.029 "nvme_io": false, 01:07:08.029 "nvme_io_md": false, 01:07:08.029 "write_zeroes": true, 01:07:08.029 "zcopy": false, 01:07:08.029 "get_zone_info": false, 01:07:08.029 "zone_management": false, 01:07:08.029 "zone_append": false, 01:07:08.029 "compare": false, 01:07:08.029 "compare_and_write": false, 01:07:08.029 "abort": false, 01:07:08.029 "seek_hole": true, 01:07:08.029 "seek_data": true, 01:07:08.029 "copy": false, 01:07:08.029 "nvme_iov_md": false 01:07:08.029 }, 01:07:08.029 "driver_specific": { 01:07:08.029 "lvol": { 01:07:08.029 "lvol_store_uuid": "e2800374-591d-4c8f-add8-04e056d9afac", 01:07:08.029 "base_bdev": "aio_bdev", 01:07:08.029 "thin_provision": false, 01:07:08.029 "num_allocated_clusters": 38, 01:07:08.029 "snapshot": false, 01:07:08.029 "clone": false, 01:07:08.029 "esnap_clone": false 01:07:08.029 } 01:07:08.029 } 01:07:08.029 } 01:07:08.029 ] 01:07:08.029 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:07:08.029 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:08.029 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:07:08.288 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:07:08.288 11:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:07:08.288 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:08.856 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:07:08.856 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ee11e1a4-0450-42a3-82f2-72b8a8c23e7b 01:07:08.856 11:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2800374-591d-4c8f-add8-04e056d9afac 01:07:09.114 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:07:09.373 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 01:07:09.633 01:07:09.633 real 0m19.594s 01:07:09.633 user 0m36.631s 01:07:09.633 sys 0m4.786s 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:07:09.633 ************************************ 01:07:09.633 END TEST lvs_grow_dirty 01:07:09.633 ************************************ 
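The lvs_grow_dirty trace above kills the nvmf target with SIGKILL while the lvstore is dirty, restarts it, and then uses the `waitforbdev` helper (autotest_common.sh@903-911 in the trace) to poll `bdev_get_bdevs` with a 2000 ms timeout until the recovered lvol reappears. A minimal self-contained sketch of that poll-with-timeout pattern is below; the function and variable names are illustrative, not SPDK's actual helpers, and a marker file stands in for the bdev probe:

```shell
# Sketch of the retry/poll pattern behind waitforbdev in the trace above.
# wait_for_resource retries an arbitrary probe command until it succeeds
# or the timeout (in ms, default 2000 like bdev_timeout) expires.
wait_for_resource() {
    local probe=$1 timeout_ms=${2:-2000} waited=0
    while (( waited < timeout_ms )); do
        eval "$probe" >/dev/null 2>&1 && return 0   # probe succeeded
        sleep 0.1                                   # poll every 100 ms
        (( waited += 100 ))
    done
    return 1                                        # timed out
}

# Usage: poll for a marker file that appears shortly after we start waiting,
# analogous to waiting for a bdev to be registered after target restart.
marker=$(mktemp -u)
( sleep 0.3; touch "$marker" ) &
wait_for_resource "[ -e $marker ]" 2000 && echo "resource ready"
rm -f "$marker"
```

In the real helper the probe is an RPC round-trip (`rpc.py bdev_get_bdevs -b <name> -t 2000`), so the timeout bounds how long a dirty-lvstore recovery may take before the test fails.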
01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:07:09.633 nvmf_trace.0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:07:09.633 11:18:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:09.633 rmmod nvme_tcp 01:07:09.633 rmmod nvme_fabrics 01:07:09.633 rmmod nvme_keyring 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2592182 ']' 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2592182 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2592182 ']' 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2592182 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592182 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:07:09.633 
11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592182' 01:07:09.633 killing process with pid 2592182 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2592182 01:07:09.633 11:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2592182 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:07:09.893 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:07:10.152 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:10.152 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 01:07:10.152 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:10.152 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:10.152 11:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:12.060 
11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:07:12.060 01:07:12.060 real 0m47.279s 01:07:12.060 user 0m56.934s 01:07:12.060 sys 0m12.010s 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:07:12.060 ************************************ 01:07:12.060 END TEST nvmf_lvs_grow 01:07:12.060 ************************************ 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:07:12.060 ************************************ 01:07:12.060 START TEST nvmf_bdev_io_wait 01:07:12.060 ************************************ 01:07:12.060 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:07:12.320 * Looking for test storage... 
01:07:12.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:07:12.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:12.320 --rc genhtml_branch_coverage=1 01:07:12.320 --rc genhtml_function_coverage=1 01:07:12.320 --rc genhtml_legend=1 01:07:12.320 --rc geninfo_all_blocks=1 01:07:12.320 --rc geninfo_unexecuted_blocks=1 01:07:12.320 01:07:12.320 ' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:07:12.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:12.320 --rc genhtml_branch_coverage=1 01:07:12.320 --rc genhtml_function_coverage=1 01:07:12.320 --rc genhtml_legend=1 01:07:12.320 --rc geninfo_all_blocks=1 01:07:12.320 --rc geninfo_unexecuted_blocks=1 01:07:12.320 01:07:12.320 ' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:07:12.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:12.320 --rc genhtml_branch_coverage=1 01:07:12.320 --rc genhtml_function_coverage=1 01:07:12.320 --rc genhtml_legend=1 01:07:12.320 --rc geninfo_all_blocks=1 01:07:12.320 --rc geninfo_unexecuted_blocks=1 01:07:12.320 01:07:12.320 ' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:07:12.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:12.320 --rc genhtml_branch_coverage=1 01:07:12.320 --rc genhtml_function_coverage=1 
01:07:12.320 --rc genhtml_legend=1 01:07:12.320 --rc geninfo_all_blocks=1 01:07:12.320 --rc geninfo_unexecuted_blocks=1 01:07:12.320 01:07:12.320 ' 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:12.320 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:07:12.321 11:18:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.321 11:18:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:07:12.321 11:18:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:07:12.321 11:18:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 01:07:12.321 11:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 01:07:18.903 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:07:18.903 Found 0000:af:00.0 (0x8086 - 0x159b) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:07:18.903 Found 
0000:af:00.1 (0x8086 - 0x159b) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:18.903 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:07:18.904 Found net devices under 0000:af:00.0: cvl_0_0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:07:18.904 Found net devices under 0000:af:00.1: cvl_0_1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 01:07:18.904 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:07:18.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:07:18.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 01:07:18.904 01:07:18.904 --- 10.0.0.2 ping statistics --- 01:07:18.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:18.904 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:07:18.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:18.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 01:07:18.904 01:07:18.904 --- 10.0.0.1 ping statistics --- 01:07:18.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:18.904 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:07:18.904 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2596421 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2596421 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2596421 ']' 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.904 [2024-12-09 11:18:19.463412] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:07:18.904 [2024-12-09 11:18:19.464886] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:07:18.904 [2024-12-09 11:18:19.464936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:18.904 [2024-12-09 11:18:19.596735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:07:18.904 [2024-12-09 11:18:19.650020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:18.904 [2024-12-09 11:18:19.650068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:18.904 [2024-12-09 11:18:19.650083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:18.904 [2024-12-09 11:18:19.650096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:18.904 [2024-12-09 11:18:19.650108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:07:18.904 [2024-12-09 11:18:19.651930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:18.904 [2024-12-09 11:18:19.652018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:18.904 [2024-12-09 11:18:19.652108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:07:18.904 [2024-12-09 11:18:19.652112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:18.904 [2024-12-09 11:18:19.652542] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:18.904 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 [2024-12-09 11:18:19.799886] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:07:18.905 [2024-12-09 11:18:19.799964] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:07:18.905 [2024-12-09 11:18:19.800670] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:07:18.905 [2024-12-09 11:18:19.801272] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 [2024-12-09 11:18:19.812950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 Malloc0 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:18.905 [2024-12-09 11:18:19.881227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2596447 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2596450 01:07:18.905 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:07:18.905 { 01:07:18.905 "params": { 01:07:18.905 "name": "Nvme$subsystem", 01:07:18.905 "trtype": "$TEST_TRANSPORT", 01:07:18.905 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:18.905 "adrfam": "ipv4", 01:07:18.905 "trsvcid": "$NVMF_PORT", 01:07:18.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:18.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:18.905 "hdgst": ${hdgst:-false}, 01:07:18.905 "ddgst": ${ddgst:-false} 01:07:18.905 }, 01:07:18.905 "method": "bdev_nvme_attach_controller" 01:07:18.905 } 01:07:18.905 EOF 01:07:18.905 )") 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2596453 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:07:18.905 11:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:07:18.905 { 01:07:18.905 "params": { 01:07:18.905 "name": "Nvme$subsystem", 01:07:18.905 "trtype": "$TEST_TRANSPORT", 01:07:18.905 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:18.905 "adrfam": "ipv4", 01:07:18.905 "trsvcid": "$NVMF_PORT", 01:07:18.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:18.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:18.905 "hdgst": ${hdgst:-false}, 01:07:18.905 "ddgst": ${ddgst:-false} 01:07:18.905 }, 01:07:18.905 "method": "bdev_nvme_attach_controller" 01:07:18.905 } 01:07:18.905 EOF 01:07:18.905 )") 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2596456 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:07:18.905 { 01:07:18.905 "params": { 01:07:18.905 "name": 
"Nvme$subsystem", 01:07:18.905 "trtype": "$TEST_TRANSPORT", 01:07:18.905 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:18.905 "adrfam": "ipv4", 01:07:18.905 "trsvcid": "$NVMF_PORT", 01:07:18.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:18.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:18.905 "hdgst": ${hdgst:-false}, 01:07:18.905 "ddgst": ${ddgst:-false} 01:07:18.905 }, 01:07:18.905 "method": "bdev_nvme_attach_controller" 01:07:18.905 } 01:07:18.905 EOF 01:07:18.905 )") 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:07:18.905 { 01:07:18.905 "params": { 01:07:18.905 "name": "Nvme$subsystem", 01:07:18.905 "trtype": "$TEST_TRANSPORT", 01:07:18.905 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:18.905 "adrfam": "ipv4", 01:07:18.905 "trsvcid": "$NVMF_PORT", 01:07:18.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:18.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:18.905 "hdgst": ${hdgst:-false}, 01:07:18.905 "ddgst": ${ddgst:-false} 01:07:18.905 }, 01:07:18.905 "method": 
"bdev_nvme_attach_controller" 01:07:18.905 } 01:07:18.905 EOF 01:07:18.905 )") 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2596447 01:07:18.905 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:07:18.906 "params": { 01:07:18.906 "name": "Nvme1", 01:07:18.906 "trtype": "tcp", 01:07:18.906 "traddr": "10.0.0.2", 01:07:18.906 "adrfam": "ipv4", 01:07:18.906 "trsvcid": "4420", 01:07:18.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:18.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:18.906 "hdgst": false, 01:07:18.906 "ddgst": false 01:07:18.906 }, 01:07:18.906 "method": "bdev_nvme_attach_controller" 01:07:18.906 }' 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:07:18.906 "params": { 01:07:18.906 "name": "Nvme1", 01:07:18.906 "trtype": "tcp", 01:07:18.906 "traddr": "10.0.0.2", 01:07:18.906 "adrfam": "ipv4", 01:07:18.906 "trsvcid": "4420", 01:07:18.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:18.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:18.906 "hdgst": false, 01:07:18.906 "ddgst": false 01:07:18.906 }, 01:07:18.906 "method": "bdev_nvme_attach_controller" 01:07:18.906 }' 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:07:18.906 "params": { 01:07:18.906 "name": "Nvme1", 01:07:18.906 "trtype": "tcp", 01:07:18.906 "traddr": "10.0.0.2", 01:07:18.906 "adrfam": "ipv4", 01:07:18.906 "trsvcid": "4420", 01:07:18.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:18.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:18.906 "hdgst": false, 01:07:18.906 "ddgst": false 01:07:18.906 }, 01:07:18.906 "method": "bdev_nvme_attach_controller" 01:07:18.906 }' 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:07:18.906 11:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:07:18.906 "params": { 01:07:18.906 "name": "Nvme1", 01:07:18.906 "trtype": "tcp", 01:07:18.906 "traddr": "10.0.0.2", 01:07:18.906 "adrfam": "ipv4", 01:07:18.906 "trsvcid": "4420", 01:07:18.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:18.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:18.906 "hdgst": false, 01:07:18.906 "ddgst": false 01:07:18.906 }, 01:07:18.906 "method": "bdev_nvme_attach_controller" 
01:07:18.906 }' 01:07:18.906 [2024-12-09 11:18:19.936247] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:07:18.906 [2024-12-09 11:18:19.936313] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:07:18.906 [2024-12-09 11:18:19.943623] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:07:18.906 [2024-12-09 11:18:19.943707] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:07:18.906 [2024-12-09 11:18:19.945331] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:07:18.906 [2024-12-09 11:18:19.945400] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:07:18.906 [2024-12-09 11:18:19.949196] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:07:18.906 [2024-12-09 11:18:19.949266] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:07:19.165 [2024-12-09 11:18:20.118514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:19.165 [2024-12-09 11:18:20.162915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:07:19.165 [2024-12-09 11:18:20.248793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:19.165 [2024-12-09 11:18:20.299531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:07:19.423 [2024-12-09 11:18:20.383047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:19.423 [2024-12-09 11:18:20.439674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:19.424 [2024-12-09 11:18:20.444838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:07:19.424 [2024-12-09 11:18:20.488801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:07:19.424 Running I/O for 1 seconds... 01:07:19.682 Running I/O for 1 seconds... 01:07:19.682 Running I/O for 1 seconds... 01:07:19.682 Running I/O for 1 seconds... 
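Each of the four bdevperf jobs above (write/read/flush/unmap on core masks 0x10–0x80) reads its target-connection config from `/dev/fd/63`, i.e. via process substitution from `gen_nvmf_target_json`. A minimal sketch of that pattern, reconstructed from the JSON printed in the trace above — the helper below is a simplified stand-in, not SPDK's actual `gen_nvmf_target_json` (which assembles this per-subsystem and pipes it through `jq`):

```shell
#!/usr/bin/env bash
# Stand-in for gen_nvmf_target_json: emit the attach-controller object
# exactly as printed by the trace above. The real helper wraps this in a
# larger subsystems structure before handing it to bdevperf.
gen_target_json() {
  cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# bdevperf consumes this via process substitution, e.g. (from the trace):
#   bdevperf -m 0x10 -i 1 --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256
gen_target_json | python3 -c 'import json,sys; print(json.load(sys.stdin)["params"]["traddr"])'
# prints: 10.0.0.2
```

Running four independent bdevperf processes (`-i 1` through `-i 4`, distinct `--file-prefix` values in the EAL parameters) is what lets the write, read, flush, and unmap workloads hit the same subsystem concurrently.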
01:07:20.620 6432.00 IOPS, 25.12 MiB/s 01:07:20.620 Latency(us) 01:07:20.620 [2024-12-09T10:18:21.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:20.620 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:07:20.620 Nvme1n1 : 1.02 6434.85 25.14 0.00 0.00 19667.20 4929.45 33736.79 01:07:20.620 [2024-12-09T10:18:21.796Z] =================================================================================================================== 01:07:20.620 [2024-12-09T10:18:21.796Z] Total : 6434.85 25.14 0.00 0.00 19667.20 4929.45 33736.79 01:07:20.620 5712.00 IOPS, 22.31 MiB/s [2024-12-09T10:18:21.796Z] 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2596450 01:07:20.620 01:07:20.620 Latency(us) 01:07:20.620 [2024-12-09T10:18:21.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:20.620 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:07:20.620 Nvme1n1 : 1.01 5794.26 22.63 0.00 0.00 22000.16 6525.11 39435.58 01:07:20.620 [2024-12-09T10:18:21.796Z] =================================================================================================================== 01:07:20.620 [2024-12-09T10:18:21.796Z] Total : 5794.26 22.63 0.00 0.00 22000.16 6525.11 39435.58 01:07:20.620 13092.00 IOPS, 51.14 MiB/s 01:07:20.620 Latency(us) 01:07:20.620 [2024-12-09T10:18:21.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:20.620 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:07:20.620 Nvme1n1 : 1.00 13179.81 51.48 0.00 0.00 9693.15 2051.56 14816.83 01:07:20.620 [2024-12-09T10:18:21.796Z] =================================================================================================================== 01:07:20.620 [2024-12-09T10:18:21.796Z] Total : 13179.81 51.48 0.00 0.00 9693.15 2051.56 14816.83 01:07:20.620 162968.00 IOPS, 636.59 MiB/s 01:07:20.620 
Latency(us) 01:07:20.620 [2024-12-09T10:18:21.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:20.620 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:07:20.620 Nvme1n1 : 1.00 162601.85 635.16 0.00 0.00 782.71 336.58 2222.53 01:07:20.620 [2024-12-09T10:18:21.796Z] =================================================================================================================== 01:07:20.620 [2024-12-09T10:18:21.796Z] Total : 162601.85 635.16 0.00 0.00 782.71 336.58 2222.53 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2596453 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2596456 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:07:20.879 11:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:20.879 rmmod nvme_tcp 01:07:20.879 rmmod nvme_fabrics 01:07:20.879 rmmod nvme_keyring 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2596421 ']' 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2596421 01:07:20.879 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2596421 ']' 01:07:20.880 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2596421 01:07:20.880 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:07:20.880 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:20.880 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596421 01:07:21.138 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:07:21.138 11:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:07:21.138 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596421' 01:07:21.138 killing process with pid 2596421 01:07:21.138 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2596421 01:07:21.138 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2596421 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:21.397 11:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:23.303 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:07:23.303 01:07:23.303 real 0m11.201s 01:07:23.303 user 0m16.231s 01:07:23.303 sys 0m7.082s 01:07:23.303 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:23.303 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:07:23.303 ************************************ 01:07:23.303 END TEST nvmf_bdev_io_wait 01:07:23.303 ************************************ 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:07:23.563 ************************************ 01:07:23.563 START TEST nvmf_queue_depth 01:07:23.563 ************************************ 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:07:23.563 * Looking for test storage... 
01:07:23.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:07:23.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:23.563 --rc genhtml_branch_coverage=1 01:07:23.563 --rc genhtml_function_coverage=1 01:07:23.563 --rc genhtml_legend=1 01:07:23.563 --rc geninfo_all_blocks=1 01:07:23.563 --rc geninfo_unexecuted_blocks=1 01:07:23.563 01:07:23.563 ' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:07:23.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:23.563 --rc genhtml_branch_coverage=1 01:07:23.563 --rc genhtml_function_coverage=1 01:07:23.563 --rc genhtml_legend=1 01:07:23.563 --rc geninfo_all_blocks=1 01:07:23.563 --rc geninfo_unexecuted_blocks=1 01:07:23.563 01:07:23.563 ' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:07:23.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:23.563 --rc genhtml_branch_coverage=1 01:07:23.563 --rc genhtml_function_coverage=1 01:07:23.563 --rc genhtml_legend=1 01:07:23.563 --rc geninfo_all_blocks=1 01:07:23.563 --rc geninfo_unexecuted_blocks=1 01:07:23.563 01:07:23.563 ' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:07:23.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:23.563 --rc genhtml_branch_coverage=1 01:07:23.563 --rc genhtml_function_coverage=1 01:07:23.563 --rc genhtml_legend=1 01:07:23.563 --rc 
geninfo_all_blocks=1 01:07:23.563 --rc geninfo_unexecuted_blocks=1 01:07:23.563 01:07:23.563 ' 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:23.563 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:23.564 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:23.824 11:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:07:23.824 11:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:07:23.824 11:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 01:07:23.824 11:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 01:07:30.393 
11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:07:30.393 Found 0000:af:00.0 (0x8086 - 0x159b) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:30.393 11:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:07:30.393 Found 0000:af:00.1 (0x8086 - 0x159b) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 01:07:30.393 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:07:30.394 Found net devices under 0000:af:00.0: cvl_0_0 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:07:30.394 Found net devices under 0000:af:00.1: cvl_0_1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:07:30.394 11:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:07:30.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:07:30.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 01:07:30.394 01:07:30.394 --- 10.0.0.2 ping statistics --- 01:07:30.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:30.394 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:07:30.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:30.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 01:07:30.394 01:07:30.394 --- 10.0.0.1 ping statistics --- 01:07:30.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:30.394 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:07:30.394 11:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2600052 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2600052 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2600052 ']' 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:30.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:30.394 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.394 [2024-12-09 11:18:31.543116] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:07:30.394 [2024-12-09 11:18:31.544574] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:07:30.394 [2024-12-09 11:18:31.544624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:30.654 [2024-12-09 11:18:31.651897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:30.654 [2024-12-09 11:18:31.694662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:30.654 [2024-12-09 11:18:31.694706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:30.654 [2024-12-09 11:18:31.694717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:30.654 [2024-12-09 11:18:31.694726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:30.654 [2024-12-09 11:18:31.694734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:30.654 [2024-12-09 11:18:31.695206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:30.654 [2024-12-09 11:18:31.764275] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:07:30.654 [2024-12-09 11:18:31.764498] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:07:30.654 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:30.654 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:07:30.654 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:30.654 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:30.654 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 [2024-12-09 11:18:31.843942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 Malloc0 01:07:30.913 11:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 [2024-12-09 11:18:31.907775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:30.913 
11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2600071 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2600071 /var/tmp/bdevperf.sock 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2600071 ']' 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:30.913 11:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:30.913 [2024-12-09 11:18:31.969502] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:07:30.913 [2024-12-09 11:18:31.969572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600071 ] 01:07:31.172 [2024-12-09 11:18:32.096885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:31.172 [2024-12-09 11:18:32.149467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:31.172 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:31.172 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:07:31.172 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:07:31.172 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:31.172 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:31.431 NVMe0n1 01:07:31.431 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:31.431 11:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:07:31.431 Running I/O for 10 seconds... 
01:07:33.746 9709.00 IOPS, 37.93 MiB/s [2024-12-09T10:18:35.859Z] 10198.50 IOPS, 39.84 MiB/s [2024-12-09T10:18:36.796Z] 10242.33 IOPS, 40.01 MiB/s [2024-12-09T10:18:37.733Z] 10366.50 IOPS, 40.49 MiB/s [2024-12-09T10:18:38.670Z] 10443.60 IOPS, 40.80 MiB/s [2024-12-09T10:18:40.049Z] 10509.50 IOPS, 41.05 MiB/s [2024-12-09T10:18:40.987Z] 10554.86 IOPS, 41.23 MiB/s [2024-12-09T10:18:41.924Z] 10624.00 IOPS, 41.50 MiB/s [2024-12-09T10:18:42.861Z] 10675.78 IOPS, 41.70 MiB/s [2024-12-09T10:18:42.861Z] 10692.90 IOPS, 41.77 MiB/s 01:07:41.685 Latency(us) 01:07:41.685 [2024-12-09T10:18:42.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:41.685 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:07:41.685 Verification LBA range: start 0x0 length 0x4000 01:07:41.685 NVMe0n1 : 10.05 10724.45 41.89 0.00 0.00 95079.64 10371.78 67017.68 01:07:41.685 [2024-12-09T10:18:42.861Z] =================================================================================================================== 01:07:41.685 [2024-12-09T10:18:42.861Z] Total : 10724.45 41.89 0.00 0.00 95079.64 10371.78 67017.68 01:07:41.685 { 01:07:41.685 "results": [ 01:07:41.685 { 01:07:41.685 "job": "NVMe0n1", 01:07:41.685 "core_mask": "0x1", 01:07:41.685 "workload": "verify", 01:07:41.685 "status": "finished", 01:07:41.685 "verify_range": { 01:07:41.685 "start": 0, 01:07:41.685 "length": 16384 01:07:41.685 }, 01:07:41.685 "queue_depth": 1024, 01:07:41.685 "io_size": 4096, 01:07:41.685 "runtime": 10.054125, 01:07:41.685 "iops": 10724.4538933025, 01:07:41.685 "mibps": 41.89239802071289, 01:07:41.685 "io_failed": 0, 01:07:41.685 "io_timeout": 0, 01:07:41.685 "avg_latency_us": 95079.63778486477, 01:07:41.685 "min_latency_us": 10371.784347826087, 01:07:41.685 "max_latency_us": 67017.68347826086 01:07:41.685 } 01:07:41.685 ], 01:07:41.685 "core_count": 1 01:07:41.685 } 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2600071 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2600071 ']' 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2600071 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600071 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600071' 01:07:41.685 killing process with pid 2600071 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2600071 01:07:41.685 Received shutdown signal, test time was about 10.000000 seconds 01:07:41.685 01:07:41.685 Latency(us) 01:07:41.685 [2024-12-09T10:18:42.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:41.685 [2024-12-09T10:18:42.861Z] =================================================================================================================== 01:07:41.685 [2024-12-09T10:18:42.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:41.685 11:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2600071 01:07:41.945 11:18:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:41.945 rmmod nvme_tcp 01:07:41.945 rmmod nvme_fabrics 01:07:41.945 rmmod nvme_keyring 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2600052 ']' 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2600052 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2600052 ']' 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2600052 01:07:41.945 11:18:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:41.945 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600052 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600052' 01:07:42.204 killing process with pid 2600052 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2600052 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2600052 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:42.204 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:42.463 11:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:44.370 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:07:44.370 01:07:44.370 real 0m20.945s 01:07:44.370 user 0m23.690s 01:07:44.370 sys 0m7.165s 01:07:44.370 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:44.370 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:07:44.371 ************************************ 01:07:44.371 END TEST nvmf_queue_depth 01:07:44.371 ************************************ 01:07:44.371 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:07:44.371 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:07:44.371 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:07:44.371 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:07:44.630 ************************************ 01:07:44.630 START 
TEST nvmf_target_multipath 01:07:44.630 ************************************ 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:07:44.630 * Looking for test storage... 01:07:44.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:07:44.630 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:07:44.630 11:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:07:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:44.631 --rc genhtml_branch_coverage=1 01:07:44.631 --rc genhtml_function_coverage=1 01:07:44.631 --rc genhtml_legend=1 01:07:44.631 --rc geninfo_all_blocks=1 01:07:44.631 --rc geninfo_unexecuted_blocks=1 01:07:44.631 01:07:44.631 ' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:07:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:44.631 --rc genhtml_branch_coverage=1 01:07:44.631 --rc genhtml_function_coverage=1 01:07:44.631 --rc genhtml_legend=1 01:07:44.631 --rc geninfo_all_blocks=1 01:07:44.631 --rc geninfo_unexecuted_blocks=1 01:07:44.631 01:07:44.631 ' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:07:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:44.631 --rc genhtml_branch_coverage=1 01:07:44.631 --rc genhtml_function_coverage=1 01:07:44.631 --rc genhtml_legend=1 01:07:44.631 --rc geninfo_all_blocks=1 01:07:44.631 --rc geninfo_unexecuted_blocks=1 01:07:44.631 01:07:44.631 ' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:07:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:44.631 --rc genhtml_branch_coverage=1 01:07:44.631 --rc genhtml_function_coverage=1 01:07:44.631 --rc genhtml_legend=1 01:07:44.631 --rc geninfo_all_blocks=1 01:07:44.631 --rc geninfo_unexecuted_blocks=1 01:07:44.631 01:07:44.631 ' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:07:44.631 11:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:07:44.631 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:44.632 11:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 01:07:44.632 11:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 01:07:51.207 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:07:51.207 Found 0000:af:00.0 (0x8086 - 0x159b) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:07:51.207 Found 0000:af:00.1 (0x8086 - 0x159b) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:51.207 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:07:51.208 Found net devices under 0000:af:00.0: cvl_0_0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:07:51.208 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:07:51.208 Found net devices under 0000:af:00.1: cvl_0_1 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:07:51.208 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:07:51.208 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:07:51.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:51.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 01:07:51.208 01:07:51.208 --- 10.0.0.2 ping statistics --- 01:07:51.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:51.208 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:07:51.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:07:51.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 01:07:51.208 01:07:51.208 --- 10.0.0.1 ping statistics --- 01:07:51.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:51.208 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 01:07:51.208 only one NIC for nvmf test 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 01:07:51.208 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:51.208 rmmod nvme_tcp 01:07:51.208 rmmod nvme_fabrics 01:07:51.208 rmmod nvme_keyring 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:07:51.208 11:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:51.208 11:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:07:53.115 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:53.116 
11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:07:53.116 01:07:53.116 real 0m8.511s 01:07:53.116 user 0m2.097s 01:07:53.116 sys 0m4.488s 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:53.116 ************************************ 01:07:53.116 END TEST nvmf_target_multipath 01:07:53.116 ************************************ 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:07:53.116 ************************************ 01:07:53.116 START TEST nvmf_zcopy 01:07:53.116 ************************************ 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:07:53.116 * Looking for test storage... 
01:07:53.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 01:07:53.116 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:07:53.377 11:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:07:53.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:53.377 --rc genhtml_branch_coverage=1 01:07:53.377 --rc genhtml_function_coverage=1 01:07:53.377 --rc genhtml_legend=1 01:07:53.377 --rc geninfo_all_blocks=1 01:07:53.377 --rc geninfo_unexecuted_blocks=1 01:07:53.377 01:07:53.377 ' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:07:53.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:53.377 --rc genhtml_branch_coverage=1 01:07:53.377 --rc genhtml_function_coverage=1 01:07:53.377 --rc genhtml_legend=1 01:07:53.377 --rc geninfo_all_blocks=1 01:07:53.377 --rc geninfo_unexecuted_blocks=1 01:07:53.377 01:07:53.377 ' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:07:53.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:53.377 --rc genhtml_branch_coverage=1 01:07:53.377 --rc genhtml_function_coverage=1 01:07:53.377 --rc genhtml_legend=1 01:07:53.377 --rc geninfo_all_blocks=1 01:07:53.377 --rc geninfo_unexecuted_blocks=1 01:07:53.377 01:07:53.377 ' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:07:53.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:53.377 --rc genhtml_branch_coverage=1 01:07:53.377 --rc genhtml_function_coverage=1 01:07:53.377 --rc genhtml_legend=1 01:07:53.377 --rc geninfo_all_blocks=1 01:07:53.377 --rc geninfo_unexecuted_blocks=1 01:07:53.377 01:07:53.377 ' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:53.377 11:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:53.377 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:07:53.378 11:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 01:07:53.378 11:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 01:08:01.504 
11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:08:01.504 11:19:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:08:01.504 Found 0000:af:00.0 (0x8086 - 0x159b) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:08:01.504 Found 0000:af:00.1 (0x8086 - 0x159b) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:08:01.504 Found net devices under 0000:af:00.0: cvl_0_0 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:08:01.504 Found net devices under 0000:af:00.1: cvl_0_1 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:08:01.504 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:08:01.505 11:19:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:08:01.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:01.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 01:08:01.505 01:08:01.505 --- 10.0.0.2 ping statistics --- 01:08:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:01.505 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:08:01.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:01.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 01:08:01.505 01:08:01.505 --- 10.0.0.1 ping statistics --- 01:08:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:01.505 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2607763 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2607763 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2607763 ']' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 [2024-12-09 11:19:01.480131] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:08:01.505 [2024-12-09 11:19:01.481653] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:08:01.505 [2024-12-09 11:19:01.481706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:01.505 [2024-12-09 11:19:01.583084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:01.505 [2024-12-09 11:19:01.627092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:01.505 [2024-12-09 11:19:01.627133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:01.505 [2024-12-09 11:19:01.627145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:01.505 [2024-12-09 11:19:01.627155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:01.505 [2024-12-09 11:19:01.627164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:01.505 [2024-12-09 11:19:01.627710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:01.505 [2024-12-09 11:19:01.707812] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:08:01.505 [2024-12-09 11:19:01.708044] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 [2024-12-09 11:19:01.784215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 
11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 [2024-12-09 11:19:01.808488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 malloc0 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:08:01.505 { 01:08:01.505 "params": { 01:08:01.505 "name": "Nvme$subsystem", 01:08:01.505 "trtype": "$TEST_TRANSPORT", 01:08:01.505 "traddr": "$NVMF_FIRST_TARGET_IP", 01:08:01.505 "adrfam": "ipv4", 01:08:01.505 "trsvcid": "$NVMF_PORT", 01:08:01.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:08:01.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:08:01.505 "hdgst": ${hdgst:-false}, 01:08:01.505 "ddgst": ${ddgst:-false} 01:08:01.505 }, 01:08:01.505 "method": "bdev_nvme_attach_controller" 01:08:01.505 } 01:08:01.505 EOF 01:08:01.505 )") 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:08:01.505 11:19:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:08:01.505 11:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:08:01.505 "params": { 01:08:01.505 "name": "Nvme1", 01:08:01.505 "trtype": "tcp", 01:08:01.506 "traddr": "10.0.0.2", 01:08:01.506 "adrfam": "ipv4", 01:08:01.506 "trsvcid": "4420", 01:08:01.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:08:01.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:08:01.506 "hdgst": false, 01:08:01.506 "ddgst": false 01:08:01.506 }, 01:08:01.506 "method": "bdev_nvme_attach_controller" 01:08:01.506 }' 01:08:01.506 [2024-12-09 11:19:01.888884] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:08:01.506 [2024-12-09 11:19:01.888936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607866 ] 01:08:01.506 [2024-12-09 11:19:01.998417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:01.506 [2024-12-09 11:19:02.052568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:01.506 Running I/O for 10 seconds... 
01:08:03.517 8223.00 IOPS, 64.24 MiB/s [2024-12-09T10:19:05.643Z] 8249.50 IOPS, 64.45 MiB/s [2024-12-09T10:19:06.581Z] 8229.00 IOPS, 64.29 MiB/s [2024-12-09T10:19:07.518Z] 8255.25 IOPS, 64.49 MiB/s [2024-12-09T10:19:08.456Z] 8279.40 IOPS, 64.68 MiB/s [2024-12-09T10:19:09.837Z] 8274.33 IOPS, 64.64 MiB/s [2024-12-09T10:19:10.774Z] 8269.29 IOPS, 64.60 MiB/s [2024-12-09T10:19:11.711Z] 8268.12 IOPS, 64.59 MiB/s [2024-12-09T10:19:12.648Z] 8257.44 IOPS, 64.51 MiB/s [2024-12-09T10:19:12.648Z] 8272.10 IOPS, 64.63 MiB/s 01:08:11.472 Latency(us) 01:08:11.472 [2024-12-09T10:19:12.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:11.472 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:08:11.472 Verification LBA range: start 0x0 length 0x1000 01:08:11.472 Nvme1n1 : 10.01 8273.02 64.63 0.00 0.00 15411.18 1738.13 22795.13 01:08:11.472 [2024-12-09T10:19:12.648Z] =================================================================================================================== 01:08:11.472 [2024-12-09T10:19:12.648Z] Total : 8273.02 64.63 0.00 0.00 15411.18 1738.13 22795.13 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2609198 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:08:11.732 11:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:08:11.732 { 01:08:11.732 "params": { 01:08:11.732 "name": "Nvme$subsystem", 01:08:11.732 "trtype": "$TEST_TRANSPORT", 01:08:11.732 "traddr": "$NVMF_FIRST_TARGET_IP", 01:08:11.732 "adrfam": "ipv4", 01:08:11.732 "trsvcid": "$NVMF_PORT", 01:08:11.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:08:11.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:08:11.732 "hdgst": ${hdgst:-false}, 01:08:11.732 "ddgst": ${ddgst:-false} 01:08:11.732 }, 01:08:11.732 "method": "bdev_nvme_attach_controller" 01:08:11.732 } 01:08:11.732 EOF 01:08:11.732 )") 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:08:11.732 [2024-12-09 11:19:12.664128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:11.732 [2024-12-09 11:19:12.664163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:08:11.732 11:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:08:11.732 "params": { 01:08:11.732 "name": "Nvme1", 01:08:11.732 "trtype": "tcp", 01:08:11.732 "traddr": "10.0.0.2", 01:08:11.732 "adrfam": "ipv4", 01:08:11.732 "trsvcid": "4420", 01:08:11.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:08:11.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:08:11.732 "hdgst": false, 01:08:11.732 "ddgst": false 01:08:11.732 }, 01:08:11.732 "method": "bdev_nvme_attach_controller" 01:08:11.732 }' 01:08:11.732 [2024-12-09 11:19:12.676099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:11.732 [2024-12-09 11:19:12.676116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:11.732 [2024-12-09 11:19:12.688095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:11.732 [2024-12-09 11:19:12.688108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:11.732 [2024-12-09 11:19:12.700094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:11.732 [2024-12-09 11:19:12.700108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:11.732 [2024-12-09 11:19:12.703109] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:08:11.732 [2024-12-09 11:19:12.703177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609198 ]
01:08:11.732 [2024-12-09 11:19:12.712092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:08:11.732 [2024-12-09 11:19:12.712105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:08:11.733 [2024-12-09 11:19:12.818403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:08:11.733 [2024-12-09 11:19:12.870835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:08:11.993 Running I/O for 5 seconds...
01:08:13.030 15964.00 IOPS, 124.72 MiB/s [2024-12-09T10:19:14.206Z]
[The "Requested NSID 1 already in use" / "Unable to add namespace" error pair above repeats continuously, roughly every 12-16 ms, from 2024-12-09 11:19:12.712 through 11:19:15.048 while I/O runs; the repeats are elided here.]
add namespace 01:08:14.080 [2024-12-09 11:19:15.064349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.064371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.077749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.077771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 16150.00 IOPS, 126.17 MiB/s [2024-12-09T10:19:15.256Z] [2024-12-09 11:19:15.093081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.093102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.108238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.108264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.119310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.119330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.134355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.134377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.149773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.149795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.165141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.165163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 01:08:14.080 [2024-12-09 11:19:15.180933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.180954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.195828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.195849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.212500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.212521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.228660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.228680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.080 [2024-12-09 11:19:15.244154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.080 [2024-12-09 11:19:15.244175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.255794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.255815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.272387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.272408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.288672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.288693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.304078] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.304099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.314741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.314762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.330033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.330054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.345551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.345572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.360757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.360778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.375633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.375660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.392275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.392301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.405354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.405375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.420934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.420955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.435903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.435929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.452967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.452989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.468835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.468857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.484431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.484451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.339 [2024-12-09 11:19:15.499694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.339 [2024-12-09 11:19:15.499716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.516285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.516308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.529641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.529670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.545224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 
[2024-12-09 11:19:15.545262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.560367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.560387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.576581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.576602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.592601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.592621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.607760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.607781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.624754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.624776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.640313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.640338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.650164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.650186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.665429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.665450] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.680876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.680903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.696803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.696825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.712825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.712855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.728357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.728379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.740376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.740396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.757408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.757429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.598 [2024-12-09 11:19:15.772624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.598 [2024-12-09 11:19:15.772651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.787872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.787895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 01:08:14.857 [2024-12-09 11:19:15.804384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.804406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.820690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.820713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.836109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.836133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.846782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.846803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.862279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.862300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.877751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.877773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.893557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.893579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.908533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.908554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.924624] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.924651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.940136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.940158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.954246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.954268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.969843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.969865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:15.985918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:15.985940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.857 [2024-12-09 11:19:16.004235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.857 [2024-12-09 11:19:16.004257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.858 [2024-12-09 11:19:16.016818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.858 [2024-12-09 11:19:16.016840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:14.858 [2024-12-09 11:19:16.032668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:14.858 [2024-12-09 11:19:16.032691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.048741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.048762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.064347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.064370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.075968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.075990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.091581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.091603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 16130.33 IOPS, 126.02 MiB/s [2024-12-09T10:19:16.293Z] [2024-12-09 11:19:16.107781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.107803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.124334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.124356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.138151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.138173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.154006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.154027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.169731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.169754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.117 [2024-12-09 11:19:16.185295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.117 [2024-12-09 11:19:16.185318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.201001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.201023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.216560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.216583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.232284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.232306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.243300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.243322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.258806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.258829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.276252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 [2024-12-09 11:19:16.276275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.118 [2024-12-09 11:19:16.289311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.118 
[2024-12-09 11:19:16.289334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.305022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.305043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.320765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.320787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.336155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.336178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.350519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.350541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.366590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.366612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.384593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.384614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.399880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.399901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.416106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.416127] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.430131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.430151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.445444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.445465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.464144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.464164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.477921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.477942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.493593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.493614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.508638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.508665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.523495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.523515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.377 [2024-12-09 11:19:16.540081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.377 [2024-12-09 11:19:16.540101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 01:08:15.637 [2024-12-09 11:19:16.553536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.553557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.569305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.569331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.584392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.584412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.599786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.599806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.615962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.615984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.632222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.632244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.645336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.645357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.660522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.660543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637 [2024-12-09 11:19:16.676344] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:15.637 [2024-12-09 11:19:16.676367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:15.637
[... the same two-line error pair repeats with successive timestamps (11:19:16.689 through 11:19:17.086) while the namespace is paused ...]
01:08:16.156 16155.50 IOPS, 126.21 MiB/s [2024-12-09T10:19:17.332Z]
[... the error pair continues repeating (11:19:17.100 through 11:19:18.040) ...]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:16.937 [2024-12-09 11:19:18.040354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:16.937
[... the error pair repeats (11:19:18.050 through 11:19:18.096) ...]
01:08:16.937 16163.20 IOPS, 126.28 MiB/s
01:08:16.937 Latency(us)
01:08:16.937 [2024-12-09T10:19:18.113Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
01:08:16.937 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
01:08:16.937 Nvme1n1                                : 5.01         16165.55     126.29       0.00     0.00    7909.95    2607.19   14816.83
01:08:16.937 [2024-12-09T10:19:18.113Z] ===================================================================================================================
01:08:16.937 [2024-12-09T10:19:18.113Z] Total                                  :              16165.55     126.29       0.00     0.00    7909.95    2607.19   14816.83
01:08:16.937 [2024-12-09 11:19:18.108102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:08:16.937 [2024-12-09 11:19:18.108122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 01:08:17.197
[... the error pair repeats at 12 ms intervals (11:19:18.120 through 11:19:18.360) ...]
01:08:17.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2609198) - No such process 01:08:17.197 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2609198 01:08:17.197 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:08:17.197 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:17.197 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:17.457 11:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:17.457 delay0 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:17.457 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 01:08:17.457 [2024-12-09 11:19:18.514370] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:08:24.028 Initializing NVMe Controllers 01:08:24.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:08:24.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:08:24.028 Initialization complete. Launching workers. 
01:08:24.028 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 274, failed: 12695 01:08:24.028 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12889, failed to submit 80 01:08:24.028 success 12794, unsuccessful 95, failed 0 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:24.028 rmmod nvme_tcp 01:08:24.028 rmmod nvme_fabrics 01:08:24.028 rmmod nvme_keyring 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2607763 ']' 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2607763 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 2607763 ']' 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2607763 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2607763 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2607763' 01:08:24.028 killing process with pid 2607763 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2607763 01:08:24.028 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2607763 01:08:24.029 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:24.029 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:24.029 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:24.029 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 
01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:24.287 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:08:26.192 01:08:26.192 real 0m33.120s 01:08:26.192 user 0m42.022s 01:08:26.192 sys 0m14.213s 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:08:26.192 ************************************ 01:08:26.192 END TEST nvmf_zcopy 01:08:26.192 ************************************ 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:08:26.192 
************************************ 01:08:26.192 START TEST nvmf_nmic 01:08:26.192 ************************************ 01:08:26.192 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:08:26.452 * Looking for test storage... 01:08:26.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:08:26.452 11:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:08:26.452 11:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:26.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:26.452 --rc genhtml_branch_coverage=1 01:08:26.452 --rc genhtml_function_coverage=1 01:08:26.452 --rc genhtml_legend=1 01:08:26.452 --rc geninfo_all_blocks=1 01:08:26.452 --rc geninfo_unexecuted_blocks=1 01:08:26.452 01:08:26.452 ' 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:26.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:26.452 --rc genhtml_branch_coverage=1 01:08:26.452 --rc genhtml_function_coverage=1 01:08:26.452 --rc genhtml_legend=1 01:08:26.452 --rc geninfo_all_blocks=1 01:08:26.452 --rc geninfo_unexecuted_blocks=1 01:08:26.452 01:08:26.452 ' 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:26.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:26.452 --rc genhtml_branch_coverage=1 01:08:26.452 --rc genhtml_function_coverage=1 01:08:26.452 --rc genhtml_legend=1 01:08:26.452 --rc geninfo_all_blocks=1 01:08:26.452 --rc geninfo_unexecuted_blocks=1 01:08:26.452 01:08:26.452 ' 01:08:26.452 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:26.453 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:26.453 --rc genhtml_branch_coverage=1 01:08:26.453 --rc genhtml_function_coverage=1 01:08:26.453 --rc genhtml_legend=1 01:08:26.453 --rc geninfo_all_blocks=1 01:08:26.453 --rc geninfo_unexecuted_blocks=1 01:08:26.453 01:08:26.453 ' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:08:26.453 11:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:26.453 11:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 01:08:26.453 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.026 11:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:08:33.026 11:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:08:33.026 Found 0000:af:00.0 (0x8086 - 0x159b) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:08:33.026 Found 0000:af:00.1 (0x8086 - 0x159b) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:33.026 11:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:33.026 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:08:33.027 Found net devices under 0000:af:00.0: cvl_0_0 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:33.027 11:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:08:33.027 Found net devices under 0000:af:00.1: cvl_0_1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:08:33.027 11:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:08:33.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:33.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 01:08:33.027 01:08:33.027 --- 10.0.0.2 ping statistics --- 01:08:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:33.027 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:08:33.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:33.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 01:08:33.027 01:08:33.027 --- 10.0.0.1 ping statistics --- 01:08:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:33.027 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2613832 
01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2613832 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2613832 ']' 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:33.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:33.027 11:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.027 [2024-12-09 11:19:33.849278] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:08:33.027 [2024-12-09 11:19:33.850268] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:08:33.027 [2024-12-09 11:19:33.850309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:33.027 [2024-12-09 11:19:33.963468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:33.027 [2024-12-09 11:19:34.022108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:33.027 [2024-12-09 11:19:34.022159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:33.027 [2024-12-09 11:19:34.022174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:33.027 [2024-12-09 11:19:34.022188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:33.027 [2024-12-09 11:19:34.022200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:33.027 [2024-12-09 11:19:34.023909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:33.027 [2024-12-09 11:19:34.023998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:33.027 [2024-12-09 11:19:34.024092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:08:33.027 [2024-12-09 11:19:34.024096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:33.027 [2024-12-09 11:19:34.101378] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:08:33.027 [2024-12-09 11:19:34.101610] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:08:33.027 [2024-12-09 11:19:34.101776] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
01:08:33.027 [2024-12-09 11:19:34.102186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:08:33.027 [2024-12-09 11:19:34.102440] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 [2024-12-09 11:19:34.820964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 Malloc0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 [2024-12-09 11:19:34.893246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:08:33.966 test case1: single bdev can't be used in multiple subsystems 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 [2024-12-09 11:19:34.916685] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 01:08:33.966 [2024-12-09 11:19:34.916717] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:08:33.966 [2024-12-09 11:19:34.916733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:08:33.966 request: 01:08:33.966 { 01:08:33.966 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:08:33.966 "namespace": { 01:08:33.966 "bdev_name": "Malloc0", 01:08:33.966 "no_auto_visible": false, 01:08:33.966 "hide_metadata": false 01:08:33.966 }, 01:08:33.966 "method": "nvmf_subsystem_add_ns", 01:08:33.966 "req_id": 1 01:08:33.966 } 01:08:33.966 Got JSON-RPC error response 01:08:33.966 response: 01:08:33.966 { 01:08:33.966 "code": -32602, 01:08:33.966 "message": "Invalid parameters" 01:08:33.966 } 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:08:33.966 Adding namespace failed - expected result. 
01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:08:33.966 test case2: host connect to nvmf target in multiple paths 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:33.966 [2024-12-09 11:19:34.928807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:33.966 11:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:08:34.225 11:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:08:36.759 11:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:08:36.759 [global] 01:08:36.759 thread=1 01:08:36.759 invalidate=1 01:08:36.759 rw=write 01:08:36.759 time_based=1 01:08:36.759 runtime=1 01:08:36.759 ioengine=libaio 01:08:36.759 direct=1 01:08:36.759 bs=4096 01:08:36.759 iodepth=1 01:08:36.759 norandommap=0 01:08:36.759 numjobs=1 01:08:36.759 01:08:36.759 verify_dump=1 01:08:36.759 verify_backlog=512 01:08:36.759 verify_state_save=0 01:08:36.759 do_verify=1 01:08:36.759 verify=crc32c-intel 01:08:36.760 [job0] 01:08:36.760 filename=/dev/nvme0n1 01:08:36.760 Could not set queue depth (nvme0n1) 01:08:36.760 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:36.760 fio-3.35 01:08:36.760 Starting 1 thread 01:08:37.698 01:08:37.698 job0: (groupid=0, jobs=1): err= 0: pid=2614493: Mon Dec 9 
11:19:38 2024 01:08:37.698 read: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec) 01:08:37.698 slat (nsec): min=9850, max=44366, avg=11006.18, stdev=2163.74 01:08:37.698 clat (usec): min=183, max=450, avg=232.32, stdev=15.79 01:08:37.698 lat (usec): min=196, max=482, avg=243.32, stdev=16.13 01:08:37.698 clat percentiles (usec): 01:08:37.698 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 221], 01:08:37.698 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 01:08:37.698 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 251], 01:08:37.698 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 424], 01:08:37.698 | 99.99th=[ 449] 01:08:37.698 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:08:37.698 slat (nsec): min=13077, max=48401, avg=14208.04, stdev=2301.40 01:08:37.698 clat (usec): min=119, max=319, avg=150.86, stdev= 8.24 01:08:37.698 lat (usec): min=142, max=363, avg=165.07, stdev= 8.93 01:08:37.698 clat percentiles (usec): 01:08:37.698 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 01:08:37.698 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 151], 60.00th=[ 153], 01:08:37.698 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 01:08:37.698 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 245], 99.95th=[ 255], 01:08:37.698 | 99.99th=[ 318] 01:08:37.698 bw ( KiB/s): min=11624, max=11624, per=100.00%, avg=11624.00, stdev= 0.00, samples=1 01:08:37.698 iops : min= 2906, max= 2906, avg=2906.00, stdev= 0.00, samples=1 01:08:37.698 lat (usec) : 250=96.92%, 500=3.08% 01:08:37.698 cpu : usr=5.80%, sys=8.80%, ctx=4871, majf=0, minf=1 01:08:37.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:37.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:37.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:37.698 issued rwts: total=2311,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:37.698 
latency : target=0, window=0, percentile=100.00%, depth=1 01:08:37.698 01:08:37.698 Run status group 0 (all jobs): 01:08:37.698 READ: bw=9235KiB/s (9456kB/s), 9235KiB/s-9235KiB/s (9456kB/s-9456kB/s), io=9244KiB (9466kB), run=1001-1001msec 01:08:37.698 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 01:08:37.698 01:08:37.698 Disk stats (read/write): 01:08:37.698 nvme0n1: ios=2098/2299, merge=0/0, ticks=471/305, in_queue=776, util=91.38% 01:08:37.698 11:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:08:37.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:37.957 rmmod nvme_tcp 01:08:37.957 rmmod nvme_fabrics 01:08:37.957 rmmod nvme_keyring 01:08:37.957 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2613832 ']' 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2613832 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2613832 ']' 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2613832 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613832 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613832' 01:08:38.216 killing process with pid 2613832 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2613832 01:08:38.216 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2613832 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 01:08:38.476 11:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:40.383 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:08:40.383 01:08:40.383 real 0m14.192s 01:08:40.383 user 0m21.598s 01:08:40.383 sys 0m7.035s 01:08:40.383 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:40.383 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:08:40.383 ************************************ 01:08:40.384 END TEST nvmf_nmic 01:08:40.384 ************************************ 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:08:40.692 ************************************ 01:08:40.692 START TEST nvmf_fio_target 01:08:40.692 ************************************ 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:08:40.692 * Looking for test storage... 
01:08:40.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:08:40.692 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:40.693 
11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:40.693 --rc genhtml_branch_coverage=1 01:08:40.693 --rc genhtml_function_coverage=1 01:08:40.693 --rc genhtml_legend=1 01:08:40.693 --rc geninfo_all_blocks=1 01:08:40.693 --rc geninfo_unexecuted_blocks=1 01:08:40.693 01:08:40.693 ' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:40.693 --rc genhtml_branch_coverage=1 01:08:40.693 --rc genhtml_function_coverage=1 01:08:40.693 --rc genhtml_legend=1 01:08:40.693 --rc geninfo_all_blocks=1 01:08:40.693 --rc geninfo_unexecuted_blocks=1 01:08:40.693 01:08:40.693 ' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:40.693 --rc genhtml_branch_coverage=1 01:08:40.693 --rc genhtml_function_coverage=1 01:08:40.693 --rc genhtml_legend=1 01:08:40.693 --rc geninfo_all_blocks=1 01:08:40.693 --rc geninfo_unexecuted_blocks=1 01:08:40.693 01:08:40.693 ' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:40.693 --rc genhtml_branch_coverage=1 01:08:40.693 --rc genhtml_function_coverage=1 01:08:40.693 --rc genhtml_legend=1 01:08:40.693 --rc geninfo_all_blocks=1 
01:08:40.693 --rc geninfo_unexecuted_blocks=1 01:08:40.693 01:08:40.693 ' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:08:40.693 
11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:40.693 11:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:40.693 
11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:40.693 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:08:40.953 11:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 01:08:40.953 11:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 01:08:47.530 11:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:08:47.530 Found 0000:af:00.0 (0x8086 - 0x159b) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:08:47.530 Found 0000:af:00.1 (0x8086 - 0x159b) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:08:47.530 
11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:08:47.530 Found net 
devices under 0000:af:00.0: cvl_0_0 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:08:47.530 Found net devices under 0000:af:00.1: cvl_0_1 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:08:47.530 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:08:47.531 11:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:08:47.531 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:08:47.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:47.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 01:08:47.791 01:08:47.791 --- 10.0.0.2 ping statistics --- 01:08:47.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:47.791 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:08:47.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:47.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 01:08:47.791 01:08:47.791 --- 10.0.0.1 ping statistics --- 01:08:47.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:47.791 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.791 11:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2617906 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2617906 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2617906 ']' 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:47.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:47.791 11:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.791 [2024-12-09 11:19:48.877310] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:08:47.791 [2024-12-09 11:19:48.878796] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:08:47.791 [2024-12-09 11:19:48.878849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:48.051 [2024-12-09 11:19:49.010290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:48.051 [2024-12-09 11:19:49.061858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:48.051 [2024-12-09 11:19:49.061911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:48.051 [2024-12-09 11:19:49.061928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:48.051 [2024-12-09 11:19:49.061942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:48.051 [2024-12-09 11:19:49.061953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:48.051 [2024-12-09 11:19:49.063687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:48.051 [2024-12-09 11:19:49.063785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:48.051 [2024-12-09 11:19:49.063816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:08:48.051 [2024-12-09 11:19:49.063820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:48.051 [2024-12-09 11:19:49.142185] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:08:48.051 [2024-12-09 11:19:49.142342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:08:48.051 [2024-12-09 11:19:49.142530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
01:08:48.051 [2024-12-09 11:19:49.142950] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:08:48.051 [2024-12-09 11:19:49.143210] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:48.051 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:08:48.311 [2024-12-09 11:19:49.404747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:48.311 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:48.571 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:08:48.571 11:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
01:08:49.140 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:08:49.140 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:49.140 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:08:49.140 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:49.400 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:08:49.400 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:08:49.659 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:49.919 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:08:49.919 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:50.178 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:08:50.178 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:50.747 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
01:08:50.747 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:08:50.747 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:08:51.007 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:08:51.007 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:51.267 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:08:51.267 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:08:51.527 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:51.786 [2024-12-09 11:19:52.804726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:51.786 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:08:52.044 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:08:52.303 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:08:52.563 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 01:08:54.469 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:08:54.469 [global] 01:08:54.469 thread=1 01:08:54.469 invalidate=1 01:08:54.469 rw=write 01:08:54.469 time_based=1 01:08:54.469 runtime=1 01:08:54.469 ioengine=libaio 01:08:54.469 direct=1 01:08:54.469 bs=4096 01:08:54.469 iodepth=1 01:08:54.469 norandommap=0 01:08:54.469 numjobs=1 01:08:54.469 01:08:54.469 verify_dump=1 01:08:54.469 verify_backlog=512 01:08:54.469 verify_state_save=0 01:08:54.469 do_verify=1 01:08:54.469 verify=crc32c-intel 01:08:54.469 [job0] 01:08:54.469 filename=/dev/nvme0n1 01:08:54.469 [job1] 01:08:54.469 filename=/dev/nvme0n2 01:08:54.469 [job2] 01:08:54.469 filename=/dev/nvme0n3 01:08:54.469 [job3] 01:08:54.469 filename=/dev/nvme0n4 01:08:54.727 Could not set queue depth (nvme0n1) 01:08:54.727 Could not set queue depth (nvme0n2) 01:08:54.727 Could not set queue depth (nvme0n3) 01:08:54.727 Could not set queue depth (nvme0n4) 01:08:54.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:54.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:54.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:54.984 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:54.984 fio-3.35 01:08:54.984 Starting 4 threads 01:08:56.355 01:08:56.355 job0: (groupid=0, jobs=1): err= 0: pid=2618963: Mon Dec 9 11:19:57 2024 01:08:56.355 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 01:08:56.355 slat (nsec): min=13039, max=29970, avg=27642.86, stdev=3447.89 01:08:56.355 clat (usec): min=40596, max=41027, avg=40945.18, stdev=88.71 01:08:56.355 lat (usec): min=40609, 
max=41056, avg=40972.82, stdev=91.80 01:08:56.355 clat percentiles (usec): 01:08:56.355 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 01:08:56.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:08:56.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:08:56.355 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:56.355 | 99.99th=[41157] 01:08:56.355 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 01:08:56.356 slat (usec): min=13, max=41098, avg=95.78, stdev=1815.64 01:08:56.356 clat (usec): min=135, max=431, avg=174.75, stdev=19.22 01:08:56.356 lat (usec): min=150, max=41306, avg=270.54, stdev=1817.23 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 01:08:56.356 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 01:08:56.356 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 204], 01:08:56.356 | 99.00th=[ 235], 99.50th=[ 245], 99.90th=[ 433], 99.95th=[ 433], 01:08:56.356 | 99.99th=[ 433] 01:08:56.356 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 01:08:56.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 01:08:56.356 lat (usec) : 250=95.68%, 500=0.38% 01:08:56.356 lat (msec) : 50=3.94% 01:08:56.356 cpu : usr=0.90%, sys=0.80%, ctx=535, majf=0, minf=1 01:08:56.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:56.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:56.356 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:56.356 job1: (groupid=0, jobs=1): err= 0: pid=2618964: Mon Dec 9 11:19:57 2024 01:08:56.356 read: IOPS=799, BW=3199KiB/s (3276kB/s)(3260KiB/1019msec) 
01:08:56.356 slat (nsec): min=10616, max=92576, avg=11892.98, stdev=4325.25 01:08:56.356 clat (usec): min=199, max=41064, avg=985.94, stdev=5476.56 01:08:56.356 lat (usec): min=218, max=41091, avg=997.84, stdev=5478.43 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 01:08:56.356 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 01:08:56.356 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 265], 01:08:56.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:56.356 | 99.99th=[41157] 01:08:56.356 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 01:08:56.356 slat (nsec): min=13969, max=50880, avg=15090.29, stdev=2291.31 01:08:56.356 clat (usec): min=129, max=1863, avg=179.72, stdev=66.16 01:08:56.356 lat (usec): min=154, max=1878, avg=194.81, stdev=66.39 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 01:08:56.356 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 169], 60.00th=[ 176], 01:08:56.356 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 237], 95.00th=[ 289], 01:08:56.356 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 367], 99.95th=[ 1860], 01:08:56.356 | 99.99th=[ 1860] 01:08:56.356 bw ( KiB/s): min= 8192, max= 8192, per=81.60%, avg=8192.00, stdev= 0.00, samples=1 01:08:56.356 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:08:56.356 lat (usec) : 250=89.29%, 500=9.73%, 750=0.11% 01:08:56.356 lat (msec) : 2=0.05%, 50=0.82% 01:08:56.356 cpu : usr=1.08%, sys=4.52%, ctx=1840, majf=0, minf=1 01:08:56.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:56.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 issued rwts: total=815,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:56.356 latency : 
target=0, window=0, percentile=100.00%, depth=1 01:08:56.356 job2: (groupid=0, jobs=1): err= 0: pid=2618965: Mon Dec 9 11:19:57 2024 01:08:56.356 read: IOPS=294, BW=1177KiB/s (1205kB/s)(1196KiB/1016msec) 01:08:56.356 slat (nsec): min=10505, max=31401, avg=12648.00, stdev=4099.99 01:08:56.356 clat (usec): min=268, max=41084, avg=3026.15, stdev=10163.57 01:08:56.356 lat (usec): min=279, max=41100, avg=3038.80, stdev=10166.83 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 01:08:56.356 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 01:08:56.356 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 392], 95.00th=[41157], 01:08:56.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:56.356 | 99.99th=[41157] 01:08:56.356 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 01:08:56.356 slat (nsec): min=13328, max=52751, avg=14821.06, stdev=3085.47 01:08:56.356 clat (usec): min=145, max=887, avg=188.76, stdev=49.42 01:08:56.356 lat (usec): min=159, max=906, avg=203.59, stdev=50.42 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 01:08:56.356 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 01:08:56.356 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 225], 95.00th=[ 245], 01:08:56.356 | 99.00th=[ 322], 99.50th=[ 486], 99.90th=[ 889], 99.95th=[ 889], 01:08:56.356 | 99.99th=[ 889] 01:08:56.356 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 01:08:56.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 01:08:56.356 lat (usec) : 250=60.91%, 500=36.13%, 750=0.25%, 1000=0.25% 01:08:56.356 lat (msec) : 50=2.47% 01:08:56.356 cpu : usr=1.77%, sys=0.79%, ctx=811, majf=0, minf=2 01:08:56.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:56.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 01:08:56.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 issued rwts: total=299,512,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:56.356 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:56.356 job3: (groupid=0, jobs=1): err= 0: pid=2618966: Mon Dec 9 11:19:57 2024 01:08:56.356 read: IOPS=17, BW=70.6KiB/s (72.3kB/s)(72.0KiB/1020msec) 01:08:56.356 slat (nsec): min=13097, max=29000, avg=26894.11, stdev=3614.38 01:08:56.356 clat (usec): min=40864, max=41303, avg=40972.99, stdev=104.33 01:08:56.356 lat (usec): min=40892, max=41316, avg=40999.89, stdev=101.63 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 01:08:56.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:08:56.356 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:08:56.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:56.356 | 99.99th=[41157] 01:08:56.356 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 01:08:56.356 slat (usec): min=4, max=39174, avg=373.67, stdev=3412.39 01:08:56.356 clat (usec): min=127, max=830, avg=173.29, stdev=41.81 01:08:56.356 lat (usec): min=132, max=39447, avg=546.95, stdev=3421.04 01:08:56.356 clat percentiles (usec): 01:08:56.356 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 151], 01:08:56.356 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 01:08:56.356 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 01:08:56.356 | 99.00th=[ 285], 99.50th=[ 379], 99.90th=[ 832], 99.95th=[ 832], 01:08:56.356 | 99.99th=[ 832] 01:08:56.356 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 01:08:56.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 01:08:56.356 lat (usec) : 250=94.72%, 500=1.70%, 1000=0.19% 01:08:56.356 lat (msec) : 50=3.40% 01:08:56.356 cpu : 
usr=0.69%, sys=0.49%, ctx=537, majf=0, minf=1 01:08:56.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:56.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:56.356 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:56.356 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:56.356 01:08:56.356 Run status group 0 (all jobs): 01:08:56.356 READ: bw=4522KiB/s (4630kB/s), 70.6KiB/s-3199KiB/s (72.3kB/s-3276kB/s), io=4612KiB (4723kB), run=1001-1020msec 01:08:56.356 WRITE: bw=9.80MiB/s (10.3MB/s), 2008KiB/s-4020KiB/s (2056kB/s-4116kB/s), io=10.0MiB (10.5MB), run=1001-1020msec 01:08:56.356 01:08:56.356 Disk stats (read/write): 01:08:56.356 nvme0n1: ios=71/512, merge=0/0, ticks=986/80, in_queue=1066, util=97.90% 01:08:56.356 nvme0n2: ios=860/1024, merge=0/0, ticks=944/187, in_queue=1131, util=98.35% 01:08:56.356 nvme0n3: ios=294/512, merge=0/0, ticks=696/94, in_queue=790, util=87.69% 01:08:56.356 nvme0n4: ios=47/512, merge=0/0, ticks=1287/79, in_queue=1366, util=98.90% 01:08:56.356 11:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:08:56.356 [global] 01:08:56.356 thread=1 01:08:56.356 invalidate=1 01:08:56.356 rw=randwrite 01:08:56.356 time_based=1 01:08:56.356 runtime=1 01:08:56.356 ioengine=libaio 01:08:56.356 direct=1 01:08:56.356 bs=4096 01:08:56.356 iodepth=1 01:08:56.356 norandommap=0 01:08:56.356 numjobs=1 01:08:56.356 01:08:56.356 verify_dump=1 01:08:56.356 verify_backlog=512 01:08:56.356 verify_state_save=0 01:08:56.356 do_verify=1 01:08:56.356 verify=crc32c-intel 01:08:56.356 [job0] 01:08:56.356 filename=/dev/nvme0n1 01:08:56.356 [job1] 01:08:56.356 filename=/dev/nvme0n2 01:08:56.356 [job2] 01:08:56.356 filename=/dev/nvme0n3 
01:08:56.356 [job3] 01:08:56.356 filename=/dev/nvme0n4 01:08:56.356 Could not set queue depth (nvme0n1) 01:08:56.356 Could not set queue depth (nvme0n2) 01:08:56.356 Could not set queue depth (nvme0n3) 01:08:56.356 Could not set queue depth (nvme0n4) 01:08:56.614 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:56.614 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:56.614 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:56.614 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:56.614 fio-3.35 01:08:56.614 Starting 4 threads 01:08:57.991 01:08:57.991 job0: (groupid=0, jobs=1): err= 0: pid=2619259: Mon Dec 9 11:19:58 2024 01:08:57.991 read: IOPS=911, BW=3646KiB/s (3733kB/s)(3744KiB/1027msec) 01:08:57.991 slat (nsec): min=8638, max=27601, avg=9580.80, stdev=1471.39 01:08:57.991 clat (usec): min=234, max=41098, avg=852.66, stdev=4762.78 01:08:57.991 lat (usec): min=243, max=41114, avg=862.24, stdev=4763.71 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 01:08:57.991 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 01:08:57.991 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 379], 01:08:57.991 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:57.991 | 99.99th=[41157] 01:08:57.991 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 01:08:57.991 slat (nsec): min=11378, max=38180, avg=12697.53, stdev=2021.96 01:08:57.991 clat (usec): min=152, max=313, avg=196.77, stdev=20.33 01:08:57.991 lat (usec): min=163, max=351, avg=209.47, stdev=20.84 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 01:08:57.991 | 30.00th=[ 
182], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 204], 01:08:57.991 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 01:08:57.991 | 99.00th=[ 243], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 314], 01:08:57.991 | 99.99th=[ 314] 01:08:57.991 bw ( KiB/s): min= 8192, max= 8192, per=37.45%, avg=8192.00, stdev= 0.00, samples=1 01:08:57.991 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:08:57.991 lat (usec) : 250=52.50%, 500=46.84% 01:08:57.991 lat (msec) : 50=0.66% 01:08:57.991 cpu : usr=0.88%, sys=2.63%, ctx=1960, majf=0, minf=1 01:08:57.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:57.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 issued rwts: total=936,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:57.991 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:57.991 job1: (groupid=0, jobs=1): err= 0: pid=2619260: Mon Dec 9 11:19:58 2024 01:08:57.991 read: IOPS=2206, BW=8827KiB/s (9039kB/s)(8836KiB/1001msec) 01:08:57.991 slat (nsec): min=8884, max=41515, avg=10040.52, stdev=1602.18 01:08:57.991 clat (usec): min=179, max=437, avg=239.50, stdev=31.29 01:08:57.991 lat (usec): min=188, max=448, avg=249.54, stdev=31.25 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 01:08:57.991 | 30.00th=[ 217], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 245], 01:08:57.991 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 01:08:57.991 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 408], 01:08:57.991 | 99.99th=[ 437] 01:08:57.991 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:08:57.991 slat (nsec): min=11978, max=42722, avg=13778.97, stdev=2697.76 01:08:57.991 clat (usec): min=117, max=712, avg=156.18, stdev=25.23 01:08:57.991 lat (usec): min=131, max=724, 
avg=169.96, stdev=25.55 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 139], 01:08:57.991 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 01:08:57.991 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 196], 01:08:57.991 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 392], 01:08:57.991 | 99.99th=[ 709] 01:08:57.991 bw ( KiB/s): min= 9536, max= 9536, per=43.60%, avg=9536.00, stdev= 0.00, samples=1 01:08:57.991 iops : min= 2384, max= 2384, avg=2384.00, stdev= 0.00, samples=1 01:08:57.991 lat (usec) : 250=86.94%, 500=13.04%, 750=0.02% 01:08:57.991 cpu : usr=4.60%, sys=4.90%, ctx=4773, majf=0, minf=1 01:08:57.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:57.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:57.991 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:57.991 job2: (groupid=0, jobs=1): err= 0: pid=2619263: Mon Dec 9 11:19:58 2024 01:08:57.991 read: IOPS=1186, BW=4746KiB/s (4860kB/s)(4888KiB/1030msec) 01:08:57.991 slat (nsec): min=8773, max=29164, avg=9944.98, stdev=2101.08 01:08:57.991 clat (usec): min=202, max=41400, avg=583.78, stdev=3670.68 01:08:57.991 lat (usec): min=214, max=41410, avg=593.73, stdev=3671.31 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 01:08:57.991 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 01:08:57.991 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 330], 01:08:57.991 | 99.00th=[ 445], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 01:08:57.991 | 99.99th=[41157] 01:08:57.991 write: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec); 0 zone resets 01:08:57.991 slat (nsec): min=11483, 
max=44045, avg=12691.16, stdev=1782.87 01:08:57.991 clat (usec): min=139, max=368, avg=180.55, stdev=27.32 01:08:57.991 lat (usec): min=151, max=380, avg=193.24, stdev=27.57 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 01:08:57.991 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 186], 01:08:57.991 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 01:08:57.991 | 99.00th=[ 265], 99.50th=[ 314], 99.90th=[ 363], 99.95th=[ 367], 01:08:57.991 | 99.99th=[ 367] 01:08:57.991 bw ( KiB/s): min= 3040, max= 9248, per=28.09%, avg=6144.00, stdev=4389.72, samples=2 01:08:57.991 iops : min= 760, max= 2312, avg=1536.00, stdev=1097.43, samples=2 01:08:57.991 lat (usec) : 250=84.12%, 500=15.48% 01:08:57.991 lat (msec) : 2=0.04%, 50=0.36% 01:08:57.991 cpu : usr=2.43%, sys=2.72%, ctx=2758, majf=0, minf=1 01:08:57.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:57.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 issued rwts: total=1222,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:57.991 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:57.991 job3: (groupid=0, jobs=1): err= 0: pid=2619264: Mon Dec 9 11:19:58 2024 01:08:57.991 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 01:08:57.991 slat (nsec): min=12809, max=29103, avg=26175.95, stdev=3170.46 01:08:57.991 clat (usec): min=40904, max=41114, avg=40975.51, stdev=45.84 01:08:57.991 lat (usec): min=40931, max=41127, avg=41001.69, stdev=43.82 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 01:08:57.991 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:08:57.991 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:08:57.991 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:08:57.991 | 99.99th=[41157] 01:08:57.991 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 01:08:57.991 slat (nsec): min=11544, max=39596, avg=12849.62, stdev=1925.00 01:08:57.991 clat (usec): min=167, max=393, avg=204.60, stdev=28.24 01:08:57.991 lat (usec): min=180, max=407, avg=217.45, stdev=28.57 01:08:57.991 clat percentiles (usec): 01:08:57.991 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 01:08:57.991 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 01:08:57.991 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 245], 01:08:57.991 | 99.00th=[ 338], 99.50th=[ 383], 99.90th=[ 396], 99.95th=[ 396], 01:08:57.991 | 99.99th=[ 396] 01:08:57.991 bw ( KiB/s): min= 4096, max= 4096, per=18.73%, avg=4096.00, stdev= 0.00, samples=1 01:08:57.991 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 01:08:57.991 lat (usec) : 250=91.39%, 500=4.49% 01:08:57.991 lat (msec) : 50=4.12% 01:08:57.991 cpu : usr=0.30%, sys=0.79%, ctx=534, majf=0, minf=1 01:08:57.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:57.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:57.991 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:57.991 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:57.991 01:08:57.991 Run status group 0 (all jobs): 01:08:57.991 READ: bw=16.6MiB/s (17.5MB/s), 86.7KiB/s-8827KiB/s (88.8kB/s-9039kB/s), io=17.1MiB (18.0MB), run=1001-1030msec 01:08:57.991 WRITE: bw=21.4MiB/s (22.4MB/s), 2018KiB/s-9.99MiB/s (2066kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1030msec 01:08:57.991 01:08:57.991 Disk stats (read/write): 01:08:57.991 nvme0n1: ios=980/1024, merge=0/0, ticks=577/192, in_queue=769, util=82.16% 01:08:57.991 nvme0n2: ios=1742/2048, merge=0/0, ticks=628/313, 
in_queue=941, util=96.29% 01:08:57.991 nvme0n3: ios=1266/1536, merge=0/0, ticks=491/261, in_queue=752, util=89.13% 01:08:57.991 nvme0n4: ios=45/512, merge=0/0, ticks=834/99, in_queue=933, util=94.13% 01:08:57.991 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:08:57.991 [global] 01:08:57.991 thread=1 01:08:57.991 invalidate=1 01:08:57.991 rw=write 01:08:57.991 time_based=1 01:08:57.991 runtime=1 01:08:57.991 ioengine=libaio 01:08:57.991 direct=1 01:08:57.991 bs=4096 01:08:57.991 iodepth=128 01:08:57.991 norandommap=0 01:08:57.991 numjobs=1 01:08:57.991 01:08:57.992 verify_dump=1 01:08:57.992 verify_backlog=512 01:08:57.992 verify_state_save=0 01:08:57.992 do_verify=1 01:08:57.992 verify=crc32c-intel 01:08:57.992 [job0] 01:08:57.992 filename=/dev/nvme0n1 01:08:57.992 [job1] 01:08:57.992 filename=/dev/nvme0n2 01:08:57.992 [job2] 01:08:57.992 filename=/dev/nvme0n3 01:08:57.992 [job3] 01:08:57.992 filename=/dev/nvme0n4 01:08:57.992 Could not set queue depth (nvme0n1) 01:08:57.992 Could not set queue depth (nvme0n2) 01:08:57.992 Could not set queue depth (nvme0n3) 01:08:57.992 Could not set queue depth (nvme0n4) 01:08:58.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:58.249 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:58.249 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:58.249 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:58.249 fio-3.35 01:08:58.249 Starting 4 threads 01:08:59.620 01:08:59.620 job0: (groupid=0, jobs=1): err= 0: pid=2619569: Mon Dec 9 11:20:00 2024 01:08:59.620 read: IOPS=3718, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1003msec) 01:08:59.620 slat 
(usec): min=2, max=10716, avg=69.53, stdev=546.63 01:08:59.620 clat (usec): min=996, max=56962, avg=12455.15, stdev=4495.76 01:08:59.620 lat (usec): min=1533, max=61163, avg=12524.68, stdev=4534.07 01:08:59.620 clat percentiles (usec): 01:08:59.620 | 1.00th=[ 5145], 5.00th=[ 7242], 10.00th=[ 9241], 20.00th=[ 9765], 01:08:59.620 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11863], 60.00th=[12387], 01:08:59.620 | 70.00th=[12649], 80.00th=[13829], 90.00th=[17171], 95.00th=[21627], 01:08:59.620 | 99.00th=[26608], 99.50th=[28967], 99.90th=[56886], 99.95th=[56886], 01:08:59.620 | 99.99th=[56886] 01:08:59.620 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 01:08:59.620 slat (usec): min=4, max=41443, avg=138.95, stdev=1129.88 01:08:59.620 clat (usec): min=1238, max=87837, avg=17518.37, stdev=14048.71 01:08:59.620 lat (usec): min=1281, max=87861, avg=17657.32, stdev=14174.55 01:08:59.620 clat percentiles (usec): 01:08:59.620 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 8586], 01:08:59.620 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[12780], 60.00th=[13566], 01:08:59.620 | 70.00th=[15795], 80.00th=[26084], 90.00th=[34866], 95.00th=[45351], 01:08:59.620 | 99.00th=[74974], 99.50th=[78119], 99.90th=[87557], 99.95th=[87557], 01:08:59.620 | 99.99th=[87557] 01:08:59.620 bw ( KiB/s): min=14712, max=18056, per=25.23%, avg=16384.00, stdev=2364.57, samples=2 01:08:59.620 iops : min= 3678, max= 4514, avg=4096.00, stdev=591.14, samples=2 01:08:59.620 lat (usec) : 1000=0.01% 01:08:59.620 lat (msec) : 2=0.19%, 4=0.33%, 10=27.83%, 20=55.47%, 50=14.11% 01:08:59.620 lat (msec) : 100=2.06% 01:08:59.620 cpu : usr=4.79%, sys=6.29%, ctx=363, majf=0, minf=1 01:08:59.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:08:59.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:59.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:59.620 issued rwts: 
total=3730,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:59.620 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:59.620 job1: (groupid=0, jobs=1): err= 0: pid=2619580: Mon Dec 9 11:20:00 2024 01:08:59.620 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 01:08:59.620 slat (usec): min=2, max=18728, avg=130.43, stdev=933.47 01:08:59.620 clat (usec): min=3365, max=72306, avg=17104.22, stdev=11322.95 01:08:59.620 lat (usec): min=3372, max=72310, avg=17234.65, stdev=11414.90 01:08:59.620 clat percentiles (usec): 01:08:59.620 | 1.00th=[ 3556], 5.00th=[ 3916], 10.00th=[ 9110], 20.00th=[10159], 01:08:59.620 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12649], 60.00th=[14091], 01:08:59.620 | 70.00th=[17433], 80.00th=[23725], 90.00th=[36439], 95.00th=[40633], 01:08:59.620 | 99.00th=[55313], 99.50th=[61604], 99.90th=[71828], 99.95th=[71828], 01:08:59.620 | 99.99th=[71828] 01:08:59.620 write: IOPS=3526, BW=13.8MiB/s (14.4MB/s)(13.9MiB/1007msec); 0 zone resets 01:08:59.620 slat (usec): min=3, max=18389, avg=156.97, stdev=1053.95 01:08:59.620 clat (usec): min=1520, max=74918, avg=21079.04, stdev=13426.15 01:08:59.620 lat (usec): min=1535, max=74923, avg=21236.01, stdev=13501.27 01:08:59.620 clat percentiles (usec): 01:08:59.620 | 1.00th=[ 3163], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[ 9634], 01:08:59.621 | 30.00th=[10421], 40.00th=[16450], 50.00th=[17957], 60.00th=[21890], 01:08:59.621 | 70.00th=[25035], 80.00th=[29492], 90.00th=[36963], 95.00th=[50070], 01:08:59.621 | 99.00th=[69731], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 01:08:59.621 | 99.99th=[74974] 01:08:59.621 bw ( KiB/s): min=11000, max=16384, per=21.08%, avg=13692.00, stdev=3807.06, samples=2 01:08:59.621 iops : min= 2750, max= 4096, avg=3423.00, stdev=951.77, samples=2 01:08:59.621 lat (msec) : 2=0.29%, 4=2.87%, 10=16.74%, 20=44.21%, 50=32.43% 01:08:59.621 lat (msec) : 100=3.46% 01:08:59.621 cpu : usr=2.49%, sys=2.68%, ctx=301, majf=0, minf=1 01:08:59.621 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 01:08:59.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:59.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:59.621 issued rwts: total=3072,3551,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:59.621 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:59.621 job2: (groupid=0, jobs=1): err= 0: pid=2619595: Mon Dec 9 11:20:00 2024 01:08:59.621 read: IOPS=5253, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1002msec) 01:08:59.621 slat (usec): min=2, max=9481, avg=90.82, stdev=551.41 01:08:59.621 clat (usec): min=1309, max=49070, avg=11949.51, stdev=4293.05 01:08:59.621 lat (usec): min=4811, max=49096, avg=12040.33, stdev=4319.51 01:08:59.621 clat percentiles (usec): 01:08:59.621 | 1.00th=[ 5604], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9110], 01:08:59.621 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11207], 60.00th=[11731], 01:08:59.621 | 70.00th=[12256], 80.00th=[14091], 90.00th=[16909], 95.00th=[21365], 01:08:59.621 | 99.00th=[26346], 99.50th=[28705], 99.90th=[49021], 99.95th=[49021], 01:08:59.621 | 99.99th=[49021] 01:08:59.621 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 01:08:59.621 slat (usec): min=4, max=10748, avg=82.48, stdev=494.87 01:08:59.621 clat (usec): min=5162, max=31016, avg=11279.14, stdev=2752.33 01:08:59.621 lat (usec): min=5175, max=31034, avg=11361.62, stdev=2800.01 01:08:59.621 clat percentiles (usec): 01:08:59.621 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 8848], 01:08:59.621 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[11600], 01:08:59.621 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14484], 95.00th=[17695], 01:08:59.621 | 99.00th=[20579], 99.50th=[20579], 99.90th=[21627], 99.95th=[25297], 01:08:59.621 | 99.99th=[31065] 01:08:59.621 bw ( KiB/s): min=22184, max=22872, per=34.69%, avg=22528.00, stdev=486.49, samples=2 01:08:59.621 iops : min= 5546, max= 5718, 
avg=5632.00, stdev=121.62, samples=2 01:08:59.621 lat (msec) : 2=0.01%, 10=39.43%, 20=55.73%, 50=4.84% 01:08:59.621 cpu : usr=7.59%, sys=9.29%, ctx=409, majf=0, minf=1 01:08:59.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 01:08:59.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:59.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:59.621 issued rwts: total=5264,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:59.621 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:59.621 job3: (groupid=0, jobs=1): err= 0: pid=2619601: Mon Dec 9 11:20:00 2024 01:08:59.621 read: IOPS=2893, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1005msec) 01:08:59.621 slat (usec): min=3, max=16142, avg=135.55, stdev=881.27 01:08:59.621 clat (usec): min=2555, max=52599, avg=16180.20, stdev=7319.99 01:08:59.621 lat (usec): min=2887, max=52608, avg=16315.74, stdev=7390.32 01:08:59.621 clat percentiles (usec): 01:08:59.621 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[11994], 01:08:59.621 | 30.00th=[12780], 40.00th=[13304], 50.00th=[14222], 60.00th=[15270], 01:08:59.621 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[31851], 01:08:59.621 | 99.00th=[46924], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 01:08:59.621 | 99.99th=[52691] 01:08:59.621 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 01:08:59.621 slat (usec): min=4, max=11696, avg=181.44, stdev=898.65 01:08:59.621 clat (usec): min=1887, max=83631, avg=26189.91, stdev=19401.32 01:08:59.621 lat (usec): min=1905, max=83641, avg=26371.35, stdev=19527.51 01:08:59.621 clat percentiles (usec): 01:08:59.621 | 1.00th=[ 5473], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[11338], 01:08:59.621 | 30.00th=[13829], 40.00th=[16319], 50.00th=[17695], 60.00th=[21627], 01:08:59.621 | 70.00th=[29754], 80.00th=[40633], 90.00th=[57934], 95.00th=[69731], 01:08:59.621 | 99.00th=[82314], 99.50th=[82314], 
99.90th=[83362], 99.95th=[83362], 01:08:59.621 | 99.99th=[83362] 01:08:59.621 bw ( KiB/s): min=11792, max=12784, per=18.92%, avg=12288.00, stdev=701.45, samples=2 01:08:59.621 iops : min= 2948, max= 3196, avg=3072.00, stdev=175.36, samples=2 01:08:59.621 lat (msec) : 2=0.03%, 4=0.30%, 10=11.47%, 20=57.06%, 50=23.21% 01:08:59.621 lat (msec) : 100=7.93% 01:08:59.621 cpu : usr=5.18%, sys=4.68%, ctx=315, majf=0, minf=2 01:08:59.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 01:08:59.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:59.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:59.621 issued rwts: total=2908,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:59.621 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:59.621 01:08:59.621 Run status group 0 (all jobs): 01:08:59.621 READ: bw=58.1MiB/s (60.9MB/s), 11.3MiB/s-20.5MiB/s (11.9MB/s-21.5MB/s), io=58.5MiB (61.3MB), run=1002-1007msec 01:08:59.621 WRITE: bw=63.4MiB/s (66.5MB/s), 11.9MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=63.9MiB (67.0MB), run=1002-1007msec 01:08:59.621 01:08:59.621 Disk stats (read/write): 01:08:59.621 nvme0n1: ios=3105/3143, merge=0/0, ticks=36147/52464, in_queue=88611, util=100.00% 01:08:59.621 nvme0n2: ios=2708/3072, merge=0/0, ticks=24266/31367, in_queue=55633, util=97.34% 01:08:59.621 nvme0n3: ios=4320/4608, merge=0/0, ticks=23869/21540, in_queue=45409, util=96.08% 01:08:59.621 nvme0n4: ios=2560/2575, merge=0/0, ticks=40731/61131, in_queue=101862, util=89.51% 01:08:59.621 11:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:08:59.621 [global] 01:08:59.621 thread=1 01:08:59.621 invalidate=1 01:08:59.621 rw=randwrite 01:08:59.621 time_based=1 01:08:59.621 runtime=1 01:08:59.621 ioengine=libaio 01:08:59.621 direct=1 01:08:59.621 bs=4096 
01:08:59.621 iodepth=128 01:08:59.621 norandommap=0 01:08:59.621 numjobs=1 01:08:59.621 01:08:59.621 verify_dump=1 01:08:59.621 verify_backlog=512 01:08:59.621 verify_state_save=0 01:08:59.621 do_verify=1 01:08:59.621 verify=crc32c-intel 01:08:59.621 [job0] 01:08:59.621 filename=/dev/nvme0n1 01:08:59.621 [job1] 01:08:59.621 filename=/dev/nvme0n2 01:08:59.621 [job2] 01:08:59.621 filename=/dev/nvme0n3 01:08:59.621 [job3] 01:08:59.621 filename=/dev/nvme0n4 01:08:59.621 Could not set queue depth (nvme0n1) 01:08:59.621 Could not set queue depth (nvme0n2) 01:08:59.621 Could not set queue depth (nvme0n3) 01:08:59.621 Could not set queue depth (nvme0n4) 01:08:59.879 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:59.879 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:59.879 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:59.879 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:59.879 fio-3.35 01:08:59.879 Starting 4 threads 01:09:01.252 01:09:01.252 job0: (groupid=0, jobs=1): err= 0: pid=2619937: Mon Dec 9 11:20:02 2024 01:09:01.252 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 01:09:01.252 slat (usec): min=5, max=29269, avg=178.73, stdev=1426.42 01:09:01.252 clat (usec): min=6649, max=82903, avg=23448.42, stdev=14004.98 01:09:01.252 lat (usec): min=6671, max=82931, avg=23627.15, stdev=14143.32 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[11076], 5.00th=[14615], 10.00th=[14877], 20.00th=[15664], 01:09:01.252 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 01:09:01.252 | 70.00th=[17957], 80.00th=[30278], 90.00th=[49546], 95.00th=[55313], 01:09:01.252 | 99.00th=[69731], 99.50th=[69731], 99.90th=[70779], 99.95th=[77071], 01:09:01.252 | 99.99th=[83362] 
01:09:01.252 write: IOPS=2593, BW=10.1MiB/s (10.6MB/s)(10.3MiB/1012msec); 0 zone resets 01:09:01.252 slat (usec): min=5, max=34325, avg=195.08, stdev=1680.80 01:09:01.252 clat (usec): min=7300, max=81973, avg=25672.20, stdev=14983.04 01:09:01.252 lat (usec): min=9208, max=81988, avg=25867.28, stdev=15149.26 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[10814], 5.00th=[12780], 10.00th=[12911], 20.00th=[15926], 01:09:01.252 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16909], 60.00th=[18220], 01:09:01.252 | 70.00th=[31065], 80.00th=[40633], 90.00th=[47449], 95.00th=[55313], 01:09:01.252 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 01:09:01.252 | 99.99th=[82314] 01:09:01.252 bw ( KiB/s): min= 7552, max=12928, per=15.73%, avg=10240.00, stdev=3801.41, samples=2 01:09:01.252 iops : min= 1888, max= 3232, avg=2560.00, stdev=950.35, samples=2 01:09:01.252 lat (msec) : 10=0.35%, 20=67.98%, 50=23.14%, 100=8.52% 01:09:01.252 cpu : usr=3.56%, sys=5.84%, ctx=113, majf=0, minf=1 01:09:01.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:09:01.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:01.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:01.252 issued rwts: total=2560,2625,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:01.252 latency : target=0, window=0, percentile=100.00%, depth=128 01:09:01.252 job1: (groupid=0, jobs=1): err= 0: pid=2619946: Mon Dec 9 11:20:02 2024 01:09:01.252 read: IOPS=5736, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec) 01:09:01.252 slat (usec): min=3, max=4927, avg=75.14, stdev=459.36 01:09:01.252 clat (usec): min=3112, max=15908, avg=9962.04, stdev=1477.45 01:09:01.252 lat (usec): min=3230, max=17260, avg=10037.18, stdev=1517.62 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[ 6587], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 8979], 01:09:01.252 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 
01:09:01.252 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11731], 95.00th=[12649], 01:09:01.252 | 99.00th=[13829], 99.50th=[14091], 99.90th=[15795], 99.95th=[15926], 01:09:01.252 | 99.99th=[15926] 01:09:01.252 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 01:09:01.252 slat (usec): min=4, max=7469, avg=81.58, stdev=408.08 01:09:01.252 clat (usec): min=4912, max=34823, avg=11349.04, stdev=4903.82 01:09:01.252 lat (usec): min=4947, max=34838, avg=11430.62, stdev=4943.84 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9634], 01:09:01.252 | 30.00th=[ 9765], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 01:09:01.252 | 70.00th=[10290], 80.00th=[10552], 90.00th=[13566], 95.00th=[26084], 01:09:01.252 | 99.00th=[31851], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 01:09:01.252 | 99.99th=[34866] 01:09:01.252 bw ( KiB/s): min=22840, max=26312, per=37.75%, avg=24576.00, stdev=2455.07, samples=2 01:09:01.252 iops : min= 5710, max= 6578, avg=6144.00, stdev=613.77, samples=2 01:09:01.252 lat (msec) : 4=0.38%, 10=53.15%, 20=42.73%, 50=3.75% 01:09:01.252 cpu : usr=9.97%, sys=9.17%, ctx=572, majf=0, minf=1 01:09:01.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:09:01.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:01.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:01.252 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:01.252 latency : target=0, window=0, percentile=100.00%, depth=128 01:09:01.252 job2: (groupid=0, jobs=1): err= 0: pid=2619967: Mon Dec 9 11:20:02 2024 01:09:01.252 read: IOPS=2495, BW=9982KiB/s (10.2MB/s)(9.84MiB/1009msec) 01:09:01.252 slat (usec): min=3, max=20469, avg=177.38, stdev=1275.59 01:09:01.252 clat (usec): min=4401, max=78020, avg=21736.80, stdev=8682.70 01:09:01.252 lat (usec): min=9758, max=78027, avg=21914.17, stdev=8790.98 
01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[10159], 5.00th=[12387], 10.00th=[14353], 20.00th=[17171], 01:09:01.252 | 30.00th=[18220], 40.00th=[19268], 50.00th=[20055], 60.00th=[21365], 01:09:01.252 | 70.00th=[22414], 80.00th=[23725], 90.00th=[28705], 95.00th=[31851], 01:09:01.252 | 99.00th=[67634], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 01:09:01.252 | 99.99th=[78119] 01:09:01.252 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 01:09:01.252 slat (usec): min=4, max=17478, avg=205.75, stdev=1219.25 01:09:01.252 clat (usec): min=1915, max=78021, avg=28611.07, stdev=19363.09 01:09:01.252 lat (usec): min=1934, max=78034, avg=28816.82, stdev=19501.11 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[10683], 5.00th=[12387], 10.00th=[14091], 20.00th=[15401], 01:09:01.252 | 30.00th=[16909], 40.00th=[17695], 50.00th=[20055], 60.00th=[20841], 01:09:01.252 | 70.00th=[26346], 80.00th=[50070], 90.00th=[66323], 95.00th=[67634], 01:09:01.252 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[78119], 01:09:01.252 | 99.99th=[78119] 01:09:01.252 bw ( KiB/s): min=10000, max=10480, per=15.73%, avg=10240.00, stdev=339.41, samples=2 01:09:01.252 iops : min= 2500, max= 2620, avg=2560.00, stdev=84.85, samples=2 01:09:01.252 lat (msec) : 2=0.12%, 10=0.39%, 20=48.09%, 50=40.06%, 100=11.34% 01:09:01.252 cpu : usr=3.87%, sys=5.26%, ctx=198, majf=0, minf=1 01:09:01.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:09:01.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:01.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:01.252 issued rwts: total=2518,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:01.252 latency : target=0, window=0, percentile=100.00%, depth=128 01:09:01.252 job3: (groupid=0, jobs=1): err= 0: pid=2619975: Mon Dec 9 11:20:02 2024 01:09:01.252 read: IOPS=4953, BW=19.4MiB/s (20.3MB/s)(20.2MiB/1042msec) 
01:09:01.252 slat (usec): min=2, max=5414, avg=90.87, stdev=502.83 01:09:01.252 clat (usec): min=7967, max=47900, avg=12012.50, stdev=3331.29 01:09:01.252 lat (usec): min=7970, max=47906, avg=12103.37, stdev=3356.21 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10683], 01:09:01.252 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 01:09:01.252 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13304], 95.00th=[14091], 01:09:01.252 | 99.00th=[16581], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 01:09:01.252 | 99.99th=[47973] 01:09:01.252 write: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1042msec); 0 zone resets 01:09:01.252 slat (usec): min=3, max=8586, avg=86.88, stdev=492.52 01:09:01.252 clat (usec): min=6062, max=55390, avg=12356.12, stdev=5065.16 01:09:01.252 lat (usec): min=6191, max=55396, avg=12443.00, stdev=5080.84 01:09:01.252 clat percentiles (usec): 01:09:01.252 | 1.00th=[ 8356], 5.00th=[10421], 10.00th=[10683], 20.00th=[10945], 01:09:01.252 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11731], 01:09:01.252 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13698], 95.00th=[14877], 01:09:01.252 | 99.00th=[50070], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 01:09:01.252 | 99.99th=[55313] 01:09:01.252 bw ( KiB/s): min=21960, max=22416, per=34.08%, avg=22188.00, stdev=322.44, samples=2 01:09:01.252 iops : min= 5490, max= 5604, avg=5547.00, stdev=80.61, samples=2 01:09:01.252 lat (msec) : 10=6.56%, 20=92.25%, 50=0.62%, 100=0.57% 01:09:01.252 cpu : usr=5.09%, sys=6.34%, ctx=535, majf=0, minf=1 01:09:01.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 01:09:01.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:01.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:01.252 issued rwts: total=5162,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:01.252 latency : target=0, 
window=0, percentile=100.00%, depth=128 01:09:01.252 01:09:01.252 Run status group 0 (all jobs): 01:09:01.252 READ: bw=60.0MiB/s (62.9MB/s), 9982KiB/s-22.4MiB/s (10.2MB/s-23.5MB/s), io=62.5MiB (65.5MB), run=1004-1042msec 01:09:01.252 WRITE: bw=63.6MiB/s (66.7MB/s), 9.91MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=66.3MiB (69.5MB), run=1004-1042msec 01:09:01.252 01:09:01.252 Disk stats (read/write): 01:09:01.252 nvme0n1: ios=2115/2560, merge=0/0, ticks=16815/29141, in_queue=45956, util=97.90% 01:09:01.252 nvme0n2: ios=4608/4876, merge=0/0, ticks=21948/26551, in_queue=48499, util=83.68% 01:09:01.252 nvme0n3: ios=1752/2048, merge=0/0, ticks=37637/62591, in_queue=100228, util=87.73% 01:09:01.252 nvme0n4: ios=4198/4608, merge=0/0, ticks=19887/20432, in_queue=40319, util=98.25% 01:09:01.252 11:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:09:01.252 11:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2620066 01:09:01.252 11:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:09:01.252 11:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:09:01.252 [global] 01:09:01.252 thread=1 01:09:01.252 invalidate=1 01:09:01.252 rw=read 01:09:01.252 time_based=1 01:09:01.252 runtime=10 01:09:01.252 ioengine=libaio 01:09:01.252 direct=1 01:09:01.252 bs=4096 01:09:01.252 iodepth=1 01:09:01.252 norandommap=1 01:09:01.252 numjobs=1 01:09:01.252 01:09:01.252 [job0] 01:09:01.252 filename=/dev/nvme0n1 01:09:01.252 [job1] 01:09:01.252 filename=/dev/nvme0n2 01:09:01.252 [job2] 01:09:01.252 filename=/dev/nvme0n3 01:09:01.252 [job3] 01:09:01.252 filename=/dev/nvme0n4 01:09:01.252 Could not set queue depth (nvme0n1) 01:09:01.252 Could not set queue depth (nvme0n2) 01:09:01.252 Could not set queue depth (nvme0n3) 
01:09:01.252 Could not set queue depth (nvme0n4) 01:09:01.510 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:09:01.510 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:09:01.510 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:09:01.510 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:09:01.510 fio-3.35 01:09:01.510 Starting 4 threads 01:09:04.048 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 01:09:04.305 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 01:09:04.305 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28663808, buflen=4096 01:09:04.305 fio: pid=2620320, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:09:04.561 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45236224, buflen=4096 01:09:04.562 fio: pid=2620319, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:09:04.562 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:04.562 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:09:04.819 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 01:09:04.819 fio: pid=2620317, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:09:04.819 11:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:04.819 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:09:05.383 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51310592, buflen=4096 01:09:05.383 fio: pid=2620318, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:09:05.383 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:05.383 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:09:05.383 01:09:05.383 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2620317: Mon Dec 9 11:20:06 2024 01:09:05.383 read: IOPS=25, BW=99.0KiB/s (101kB/s)(324KiB/3273msec) 01:09:05.383 slat (nsec): min=10000, max=72986, avg=26766.54, stdev=7829.73 01:09:05.383 clat (usec): min=379, max=42035, avg=40103.13, stdev=6369.53 01:09:05.383 lat (usec): min=406, max=42060, avg=40129.89, stdev=6369.60 01:09:05.383 clat percentiles (usec): 01:09:05.383 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 01:09:05.383 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:09:05.383 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 01:09:05.383 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 01:09:05.383 | 99.99th=[42206] 01:09:05.383 bw ( KiB/s): min= 96, max= 104, per=0.29%, avg=99.33, stdev= 3.93, samples=6 01:09:05.383 iops : min= 24, max= 26, avg=24.83, stdev= 0.98, samples=6 01:09:05.383 lat (usec) : 500=2.44% 
01:09:05.383 lat (msec) : 50=96.34% 01:09:05.383 cpu : usr=0.09%, sys=0.00%, ctx=85, majf=0, minf=1 01:09:05.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:09:05.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.383 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.383 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:05.383 latency : target=0, window=0, percentile=100.00%, depth=1 01:09:05.383 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2620318: Mon Dec 9 11:20:06 2024 01:09:05.383 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(48.9MiB/3566msec) 01:09:05.383 slat (usec): min=7, max=24453, avg=17.52, stdev=376.83 01:09:05.383 clat (usec): min=183, max=42293, avg=262.74, stdev=704.59 01:09:05.383 lat (usec): min=192, max=42306, avg=280.26, stdev=800.28 01:09:05.383 clat percentiles (usec): 01:09:05.383 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 223], 01:09:05.383 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 01:09:05.383 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 306], 01:09:05.383 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 529], 99.95th=[ 2114], 01:09:05.383 | 99.99th=[41157] 01:09:05.383 bw ( KiB/s): min=10224, max=15664, per=41.32%, avg=14205.86, stdev=1862.75, samples=7 01:09:05.383 iops : min= 2556, max= 3916, avg=3551.43, stdev=465.67, samples=7 01:09:05.383 lat (usec) : 250=53.50%, 500=46.38%, 750=0.05%, 1000=0.01% 01:09:05.383 lat (msec) : 4=0.02%, 50=0.03% 01:09:05.383 cpu : usr=1.63%, sys=4.04%, ctx=12534, majf=0, minf=1 01:09:05.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:09:05.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.383 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.383 issued rwts: total=12528,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 01:09:05.383 latency : target=0, window=0, percentile=100.00%, depth=1 01:09:05.383 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2620319: Mon Dec 9 11:20:06 2024 01:09:05.383 read: IOPS=3697, BW=14.4MiB/s (15.1MB/s)(43.1MiB/2987msec) 01:09:05.383 slat (usec): min=8, max=13641, avg=12.33, stdev=146.35 01:09:05.383 clat (usec): min=184, max=3236, avg=254.16, stdev=48.29 01:09:05.383 lat (usec): min=196, max=14014, avg=266.49, stdev=155.51 01:09:05.383 clat percentiles (usec): 01:09:05.383 | 1.00th=[ 206], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 01:09:05.383 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 01:09:05.383 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 01:09:05.383 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 502], 99.95th=[ 519], 01:09:05.383 | 99.99th=[ 652] 01:09:05.383 bw ( KiB/s): min=13192, max=16288, per=43.38%, avg=14913.60, stdev=1256.81, samples=5 01:09:05.383 iops : min= 3298, max= 4072, avg=3728.40, stdev=314.20, samples=5 01:09:05.383 lat (usec) : 250=55.31%, 500=44.55%, 750=0.12% 01:09:05.383 lat (msec) : 4=0.01% 01:09:05.383 cpu : usr=1.84%, sys=4.59%, ctx=11048, majf=0, minf=2 01:09:05.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:09:05.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.384 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.384 issued rwts: total=11045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:05.384 latency : target=0, window=0, percentile=100.00%, depth=1 01:09:05.384 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2620320: Mon Dec 9 11:20:06 2024 01:09:05.384 read: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(27.3MiB/2722msec) 01:09:05.384 slat (nsec): min=3331, max=36362, avg=9344.31, stdev=2122.57 01:09:05.384 clat (usec): min=209, max=41976, avg=374.60, 
stdev=2079.22 01:09:05.384 lat (usec): min=218, max=42004, avg=383.94, stdev=2079.84 01:09:05.384 clat percentiles (usec): 01:09:05.384 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 249], 01:09:05.384 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 01:09:05.384 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 318], 01:09:05.384 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[41157], 99.95th=[41157], 01:09:05.384 | 99.99th=[42206] 01:09:05.384 bw ( KiB/s): min= 5384, max=14776, per=32.53%, avg=11184.00, stdev=4081.14, samples=5 01:09:05.384 iops : min= 1346, max= 3694, avg=2796.00, stdev=1020.29, samples=5 01:09:05.384 lat (usec) : 250=20.57%, 500=79.07%, 750=0.06%, 1000=0.01% 01:09:05.384 lat (msec) : 20=0.01%, 50=0.26% 01:09:05.384 cpu : usr=1.32%, sys=2.72%, ctx=6999, majf=0, minf=2 01:09:05.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:09:05.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.384 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:05.384 issued rwts: total=6999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:05.384 latency : target=0, window=0, percentile=100.00%, depth=1 01:09:05.384 01:09:05.384 Run status group 0 (all jobs): 01:09:05.384 READ: bw=33.6MiB/s (35.2MB/s), 99.0KiB/s-14.4MiB/s (101kB/s-15.1MB/s), io=120MiB (126MB), run=2722-3566msec 01:09:05.384 01:09:05.384 Disk stats (read/write): 01:09:05.384 nvme0n1: ios=75/0, merge=0/0, ticks=3004/0, in_queue=3004, util=93.19% 01:09:05.384 nvme0n2: ios=12497/0, merge=0/0, ticks=3201/0, in_queue=3201, util=92.37% 01:09:05.384 nvme0n3: ios=10426/0, merge=0/0, ticks=2861/0, in_queue=2861, util=99.38% 01:09:05.384 nvme0n4: ios=6990/0, merge=0/0, ticks=2381/0, in_queue=2381, util=96.33% 01:09:05.384 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:05.384 
11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:09:05.641 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:05.641 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:09:05.897 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:05.897 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:09:06.154 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:09:06.154 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2620066 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:09:06.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:09:06.746 nvmf hotplug test: fio failed as expected 01:09:06.746 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:09:07.018 11:20:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:07.018 rmmod nvme_tcp 01:09:07.018 rmmod nvme_fabrics 01:09:07.018 rmmod nvme_keyring 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2617906 ']' 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2617906 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2617906 ']' 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2617906 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:07.018 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2617906 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2617906' 01:09:07.305 killing process with pid 2617906 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2617906 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2617906 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:07.305 11:20:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:07.305 11:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:09:09.865 01:09:09.865 real 0m28.882s 01:09:09.865 user 1m21.839s 01:09:09.865 sys 0m15.394s 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:09:09.865 ************************************ 01:09:09.865 END TEST nvmf_fio_target 01:09:09.865 ************************************ 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:09:09.865 ************************************ 01:09:09.865 START TEST nvmf_bdevio 01:09:09.865 ************************************ 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:09:09.865 * Looking for test storage... 01:09:09.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:09:09.865 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:09:09.866 11:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:09.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.866 --rc genhtml_branch_coverage=1 01:09:09.866 --rc genhtml_function_coverage=1 01:09:09.866 --rc genhtml_legend=1 01:09:09.866 --rc geninfo_all_blocks=1 01:09:09.866 --rc geninfo_unexecuted_blocks=1 01:09:09.866 01:09:09.866 ' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:09.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.866 --rc genhtml_branch_coverage=1 01:09:09.866 --rc genhtml_function_coverage=1 01:09:09.866 --rc genhtml_legend=1 01:09:09.866 --rc geninfo_all_blocks=1 01:09:09.866 --rc geninfo_unexecuted_blocks=1 01:09:09.866 01:09:09.866 ' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:09.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.866 --rc genhtml_branch_coverage=1 01:09:09.866 --rc genhtml_function_coverage=1 01:09:09.866 --rc genhtml_legend=1 01:09:09.866 --rc geninfo_all_blocks=1 01:09:09.866 --rc geninfo_unexecuted_blocks=1 01:09:09.866 01:09:09.866 ' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:09.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.866 --rc genhtml_branch_coverage=1 01:09:09.866 --rc genhtml_function_coverage=1 01:09:09.866 --rc genhtml_legend=1 01:09:09.866 --rc 
geninfo_all_blocks=1 01:09:09.866 --rc geninfo_unexecuted_blocks=1 01:09:09.866 01:09:09.866 ' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:09:09.866 11:20:10 
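The `cmp_versions`/`lt` trace earlier (scripts/common.sh@333-368, used here to test `lcov --version` against 2) compares dotted version strings field by field. A condensed, standalone sketch of the same logic, assuming purely numeric fields (no leading zeros, no suffixes):

```shell
# Minimal sketch of the dotted-version comparison traced above
# (scripts/common.sh cmp_versions / lt): split on '.', '-' or ':'
# and compare field by field numerically. Numeric fields assumed.
lt() {
  local IFS='.-:'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b max
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    a=${ver1[v]:-0}   # missing fields compare as 0, so 1.15 vs 2 -> 1.15 vs 2.0
    b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # versions are equal
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why the trace takes the `lt 1.15 2` branch and enables the `--rc lcov_branch_coverage=1` options: the installed lcov predates 2.x.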
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:09.866 11:20:10 
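The `build_nvmf_app_args` trace above grows the target's command line incrementally in a bash array, appending the shm id, the trace mask, and (because this run is `--interrupt-mode`) the interrupt-mode flag. A condensed sketch of that pattern; the variable values are illustrative stand-ins, not the helper's real defaults:

```shell
# Sketch of how nvmf/common.sh assembles the nvmf_tgt argument list above.
# NVMF_APP_SHM_ID and the interrupt-mode toggle are illustrative values.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
NVMF_INTERRUPT_MODE=1

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
if [ "$NVMF_INTERRUPT_MODE" -eq 1 ]; then
  NVMF_APP+=(--interrupt-mode)                # matches the "'[' 1 -eq 1 ']'" branch above
fi

echo "${NVMF_APP[@]}"
```

Building the command as an array rather than a string keeps each flag a separate word, so later `"${NVMF_APP[@]}"` expansion survives spaces and quoting intact.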
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:09.866 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 01:09:09.867 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:09:16.421 11:20:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:09:16.421 Found 0000:af:00.0 (0x8086 - 0x159b) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:09:16.421 Found 0000:af:00.1 (0x8086 - 0x159b) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:09:16.421 11:20:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:09:16.421 Found net devices under 0000:af:00.0: cvl_0_0 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:09:16.421 Found net devices under 0000:af:00.1: cvl_0_1 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:16.421 11:20:17 
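The discovery loop above (nvmf/common.sh@410-429) maps each detected PCI NIC to its kernel interface name by globbing `/sys/bus/pci/devices/$pci/net/`. A self-contained sketch of that loop, run against a scratch directory standing in for sysfs so it needs no hardware:

```shell
# Sketch of the PCI -> net-device mapping loop traced above, using a
# scratch tree in place of /sys/bus/pci/devices. Device names mirror
# the cvl_0_0 / cvl_0_1 interfaces found in this run.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)        # one glob hit per interface
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `"${pci_net_devs[@]##*/}"` expansion is the same basename trick the real helper uses at nvmf/common.sh@427.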
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:09:16.421 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:09:16.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:16.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 01:09:16.422 01:09:16.422 --- 10.0.0.2 ping statistics --- 01:09:16.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:16.422 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:09:16.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:09:16.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 01:09:16.422 01:09:16.422 --- 10.0.0.1 ping statistics --- 01:09:16.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:16.422 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:16.422 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
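The `nvmf_tcp_init` sequence above builds the test topology: the target side of the NIC pair (`cvl_0_0`) is moved into its own network namespace, each side gets one half of `10.0.0.0/24`, port 4420 is opened, and connectivity is verified with a ping in each direction. A dry-run sketch of those steps; `run` only echoes, since the real commands need root (drop the wrapper to execute them):

```shell
# Dry-run sketch of the netns topology built above by nvmf_tcp_init.
# 'run' echoes instead of executing, since ip/iptables need root.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

Isolating the target in a namespace lets initiator and target share one host while still exercising a real TCP path over the physical NIC pair.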
nvmf/common.sh@509 -- # nvmfpid=2624196 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2624196 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2624196 ']' 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:16.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:16.681 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.681 [2024-12-09 11:20:17.673053] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:09:16.681 [2024-12-09 11:20:17.674107] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
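`waitforlisten` above blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock` (up to `max_retries=100`, per the trace). The core of that helper is a bounded poll loop; the sketch below substitutes a plain-file existence check for the real RPC probe so it runs without SPDK:

```shell
# Sketch of the waitforlisten polling pattern above: retry until the
# RPC socket appears. A temp file stands in for /var/tmp/spdk.sock;
# the real helper also verifies the pid and issues an RPC probe.
waitforlisten() {
  local rpc_addr=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    if [ -e "$rpc_addr" ]; then
      echo "listening on $rpc_addr"
      return 0
    fi
    sleep 0.1
  done
  echo "timed out waiting for $rpc_addr" >&2
  return 1
}

sock=$(mktemp)   # stands in for the UNIX-domain RPC socket
waitforlisten "$sock"
rm -f "$sock"
```

Bounding the retries is what turns a hung target into a test failure instead of a stalled pipeline.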
01:09:16.681 [2024-12-09 11:20:17.674151] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:16.681 [2024-12-09 11:20:17.760836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:09:16.681 [2024-12-09 11:20:17.809877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:16.681 [2024-12-09 11:20:17.809919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:16.681 [2024-12-09 11:20:17.809930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:16.681 [2024-12-09 11:20:17.809939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:16.681 [2024-12-09 11:20:17.809947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:16.681 [2024-12-09 11:20:17.811463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:09:16.681 [2024-12-09 11:20:17.811569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:09:16.681 [2024-12-09 11:20:17.811611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:09:16.681 [2024-12-09 11:20:17.811612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:09:16.939 [2024-12-09 11:20:17.880809] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:09:16.939 [2024-12-09 11:20:17.881385] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:09:16.939 [2024-12-09 11:20:17.881393] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
01:09:16.940 [2024-12-09 11:20:17.881583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:09:16.940 [2024-12-09 11:20:17.881669] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 [2024-12-09 11:20:17.960233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:09:16.940 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 Malloc0 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:16.940 [2024-12-09 11:20:18.036476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:09:16.940 { 01:09:16.940 "params": { 01:09:16.940 "name": "Nvme$subsystem", 01:09:16.940 "trtype": "$TEST_TRANSPORT", 01:09:16.940 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:16.940 "adrfam": "ipv4", 01:09:16.940 "trsvcid": "$NVMF_PORT", 01:09:16.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:16.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:16.940 "hdgst": ${hdgst:-false}, 01:09:16.940 "ddgst": ${ddgst:-false} 01:09:16.940 }, 01:09:16.940 "method": "bdev_nvme_attach_controller" 01:09:16.940 } 01:09:16.940 EOF 01:09:16.940 )") 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:09:16.940 11:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:09:16.940 "params": { 01:09:16.940 "name": "Nvme1", 01:09:16.940 "trtype": "tcp", 01:09:16.940 "traddr": "10.0.0.2", 01:09:16.940 "adrfam": "ipv4", 01:09:16.940 "trsvcid": "4420", 01:09:16.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:16.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:16.940 "hdgst": false, 01:09:16.940 "ddgst": false 01:09:16.940 }, 01:09:16.940 "method": "bdev_nvme_attach_controller" 01:09:16.940 }' 01:09:16.940 [2024-12-09 11:20:18.100192] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:09:16.940 [2024-12-09 11:20:18.100270] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624298 ] 01:09:17.198 [2024-12-09 11:20:18.230802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:09:17.198 [2024-12-09 11:20:18.290114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:17.198 [2024-12-09 11:20:18.290201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:09:17.198 [2024-12-09 11:20:18.290205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:17.456 I/O targets: 01:09:17.456 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:09:17.456 01:09:17.456 01:09:17.456 CUnit - A unit testing framework for C - Version 2.1-3 01:09:17.456 http://cunit.sourceforge.net/ 01:09:17.456 01:09:17.456 01:09:17.456 Suite: bdevio tests on: Nvme1n1 01:09:17.713 Test: blockdev write read block ...passed 01:09:17.713 Test: blockdev write zeroes read block ...passed 01:09:17.713 Test: blockdev write zeroes read no split ...passed 01:09:17.713 Test: blockdev 
write zeroes read split ...passed 01:09:17.713 Test: blockdev write zeroes read split partial ...passed 01:09:17.713 Test: blockdev reset ...[2024-12-09 11:20:18.734117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:09:17.713 [2024-12-09 11:20:18.734202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ce750 (9): Bad file descriptor 01:09:17.713 [2024-12-09 11:20:18.738152] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 01:09:17.713 passed 01:09:17.713 Test: blockdev write read 8 blocks ...passed 01:09:17.713 Test: blockdev write read size > 128k ...passed 01:09:17.713 Test: blockdev write read invalid size ...passed 01:09:17.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:09:17.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:09:17.713 Test: blockdev write read max offset ...passed 01:09:17.970 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:09:17.970 Test: blockdev writev readv 8 blocks ...passed 01:09:17.970 Test: blockdev writev readv 30 x 1block ...passed 01:09:17.970 Test: blockdev writev readv block ...passed 01:09:17.970 Test: blockdev writev readv size > 128k ...passed 01:09:17.970 Test: blockdev writev readv size > 128k in two iovs ...passed 01:09:17.970 Test: blockdev comparev and writev ...[2024-12-09 11:20:18.990431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.970 [2024-12-09 11:20:18.990465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.990482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 
[2024-12-09 11:20:18.990494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.990833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.990848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.990863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.990875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.991202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.991217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.991232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.991243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.991573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:18.991604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:09:17.971 [2024-12-09 11:20:18.991616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:09:17.971 passed 01:09:17.971 Test: blockdev nvme passthru rw ...passed 01:09:17.971 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:20:19.074038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:09:17.971 [2024-12-09 11:20:19.074059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:19.074183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:09:17.971 [2024-12-09 11:20:19.074197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:19.074324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:09:17.971 [2024-12-09 11:20:19.074338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:09:17.971 [2024-12-09 11:20:19.074456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:09:17.971 [2024-12-09 11:20:19.074470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:09:17.971 passed 01:09:17.971 Test: blockdev nvme admin passthru ...passed 01:09:17.971 Test: blockdev copy ...passed 01:09:17.971 01:09:17.971 Run Summary: Type Total Ran Passed Failed Inactive 01:09:17.971 suites 1 1 n/a 0 0 01:09:17.971 tests 23 23 23 0 0 01:09:17.971 asserts 152 152 152 0 n/a 01:09:17.971 01:09:17.971 Elapsed time = 0.998 
seconds 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:18.229 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:18.229 rmmod nvme_tcp 01:09:18.487 rmmod nvme_fabrics 01:09:18.487 rmmod nvme_keyring 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2624196 ']' 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2624196 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2624196 ']' 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2624196 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2624196 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2624196' 01:09:18.487 killing process with pid 2624196 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2624196 01:09:18.487 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2624196 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:18.745 11:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:09:21.272 01:09:21.272 real 0m11.267s 01:09:21.272 user 0m10.620s 01:09:21.272 sys 0m6.188s 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:09:21.272 ************************************ 01:09:21.272 END TEST nvmf_bdevio 01:09:21.272 ************************************ 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:09:21.272 01:09:21.272 real 4m52.441s 01:09:21.272 user 9m25.873s 01:09:21.272 sys 2m16.289s 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:09:21.272 11:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:09:21.272 ************************************ 01:09:21.272 END TEST nvmf_target_core_interrupt_mode 01:09:21.272 ************************************ 01:09:21.272 11:20:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:09:21.272 11:20:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:21.272 11:20:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:21.272 11:20:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:21.272 ************************************ 01:09:21.272 START TEST nvmf_interrupt 01:09:21.272 ************************************ 01:09:21.272 11:20:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:09:21.272 * Looking for test storage... 
01:09:21.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:21.272 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:21.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:21.273 --rc genhtml_branch_coverage=1 01:09:21.273 --rc genhtml_function_coverage=1 01:09:21.273 --rc genhtml_legend=1 01:09:21.273 --rc geninfo_all_blocks=1 01:09:21.273 --rc geninfo_unexecuted_blocks=1 01:09:21.273 01:09:21.273 ' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:21.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:21.273 --rc genhtml_branch_coverage=1 01:09:21.273 --rc 
genhtml_function_coverage=1 01:09:21.273 --rc genhtml_legend=1 01:09:21.273 --rc geninfo_all_blocks=1 01:09:21.273 --rc geninfo_unexecuted_blocks=1 01:09:21.273 01:09:21.273 ' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:21.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:21.273 --rc genhtml_branch_coverage=1 01:09:21.273 --rc genhtml_function_coverage=1 01:09:21.273 --rc genhtml_legend=1 01:09:21.273 --rc geninfo_all_blocks=1 01:09:21.273 --rc geninfo_unexecuted_blocks=1 01:09:21.273 01:09:21.273 ' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:21.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:21.273 --rc genhtml_branch_coverage=1 01:09:21.273 --rc genhtml_function_coverage=1 01:09:21.273 --rc genhtml_legend=1 01:09:21.273 --rc geninfo_all_blocks=1 01:09:21.273 --rc geninfo_unexecuted_blocks=1 01:09:21.273 01:09:21.273 ' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:21.273 
11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.273 
11:20:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:21.273 11:20:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:09:21.273 
11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 01:09:21.273 11:20:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:09:27.827 11:20:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:09:27.827 Found 0000:af:00.0 (0x8086 - 0x159b) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:09:27.827 Found 0000:af:00.1 (0x8086 - 0x159b) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:09:27.827 11:20:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:09:27.827 Found net devices under 0000:af:00.0: cvl_0_0 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:09:27.827 Found net devices under 0000:af:00.1: cvl_0_1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:09:27.827 11:20:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:09:27.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:27.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 01:09:27.827 01:09:27.827 --- 10.0.0.2 ping statistics --- 01:09:27.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:27.827 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:09:27.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:27.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 01:09:27.827 01:09:27.827 --- 10.0.0.1 ping statistics --- 01:09:27.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:27.827 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 01:09:27.827 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:27.828 11:20:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2627712 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2627712 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2627712 ']' 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:27.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:27.828 11:20:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:27.828 [2024-12-09 11:20:28.993203] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:09:27.828 [2024-12-09 11:20:28.994685] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:09:27.828 [2024-12-09 11:20:28.994736] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:28.085 [2024-12-09 11:20:29.115424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:09:28.085 [2024-12-09 11:20:29.166272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:28.085 [2024-12-09 11:20:29.166320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:28.085 [2024-12-09 11:20:29.166335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:28.085 [2024-12-09 11:20:29.166349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:28.085 [2024-12-09 11:20:29.166361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:28.085 [2024-12-09 11:20:29.167714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:28.085 [2024-12-09 11:20:29.167721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:28.085 [2024-12-09 11:20:29.247174] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:09:28.085 [2024-12-09 11:20:29.247264] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:09:28.085 [2024-12-09 11:20:29.247460] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
01:09:28.342 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:28.342 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 01:09:28.342 11:20:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:28.342 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 01:09:28.343 5000+0 records in 01:09:28.343 5000+0 records out 01:09:28.343 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0272684 s, 376 MB/s 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 AIO0 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:28.343 11:20:29 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 [2024-12-09 11:20:29.400598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:28.343 [2024-12-09 11:20:29.440806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2627712 0 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 0 idle 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:28.343 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627712 root 20 0 128.2g 46656 33984 S 0.0 0.1 0:00.32 reactor_0' 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627712 root 20 0 128.2g 46656 33984 S 0.0 0.1 0:00.32 reactor_0 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:28.601 
11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2627712 1 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 1 idle 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:28.601 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627716 root 20 0 128.2g 46656 33984 S 0.0 0.1 0:00.00 reactor_1' 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627716 root 20 0 128.2g 
46656 33984 S 0.0 0.1 0:00.00 reactor_1 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2627757 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2627712 0 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2627712 0 busy 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:28.859 11:20:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:09:28.859 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627712 root 20 0 128.2g 47232 33984 R 26.7 0.1 0:00.36 reactor_0' 01:09:28.859 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627712 root 20 0 128.2g 47232 33984 R 26.7 0.1 0:00.36 reactor_0 01:09:28.859 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:28.859 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:29.116 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=26.7 01:09:29.116 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=26 01:09:29.116 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:09:29.116 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:09:29.116 11:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 01:09:30.047 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 01:09:30.047 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:30.047 11:20:31 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:30.047 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:09:30.047 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627712 root 20 0 128.2g 47232 33984 R 99.9 0.1 0:02.74 reactor_0' 01:09:30.304 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627712 root 20 0 128.2g 47232 33984 R 99.9 0.1 0:02.74 reactor_0 01:09:30.304 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2627712 1 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2627712 1 busy 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627716 root 20 0 128.2g 47232 33984 R 99.9 0.1 0:01.40 reactor_1' 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627716 root 20 0 128.2g 47232 33984 R 99.9 0.1 0:01.40 reactor_1 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:30.305 11:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2627757 01:09:40.262 Initializing NVMe Controllers 01:09:40.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 01:09:40.262 Controller IO queue size 256, less than required. 01:09:40.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:09:40.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:09:40.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:09:40.262 Initialization complete. Launching workers. 01:09:40.262 ======================================================== 01:09:40.262 Latency(us) 01:09:40.262 Device Information : IOPS MiB/s Average min max 01:09:40.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16518.60 64.53 15506.08 5106.41 19916.74 01:09:40.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 11716.20 45.77 21866.99 4136.89 31404.52 01:09:40.262 ======================================================== 01:09:40.262 Total : 28234.80 110.29 18145.58 4136.89 31404.52 01:09:40.262 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2627712 0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 0 idle 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627712 root 20 0 128.2g 47232 33984 S 0.0 0.1 0:20.30 reactor_0' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627712 root 20 0 128.2g 47232 33984 S 0.0 0.1 0:20.30 reactor_0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2627712 1 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 1 idle 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627716 root 20 0 128.2g 47232 33984 S 0.0 0.1 0:09.99 reactor_1' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627716 root 20 0 128.2g 47232 33984 S 0.0 0.1 0:09.99 reactor_1 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:40.262 11:20:40 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:09:40.262 11:20:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2627712 0 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 0 idle 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:41.637 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627712 root 20 0 128.2g 66240 33984 S 0.0 0.1 0:20.47 reactor_0' 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627712 root 20 0 128.2g 66240 33984 S 0.0 0.1 0:20.47 reactor_0 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2627712 1 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2627712 1 idle 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2627712 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:09:41.895 11:20:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2627712 -w 256 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2627716 root 20 0 128.2g 66240 33984 S 0.0 0.1 0:10.03 reactor_1' 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2627716 root 20 0 128.2g 66240 33984 S 0.0 0.1 0:10.03 reactor_1 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:09:42.154 
11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:09:42.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:42.154 11:20:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:42.154 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:42.154 rmmod nvme_tcp 01:09:42.154 rmmod nvme_fabrics 01:09:42.425 rmmod nvme_keyring 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2627712 ']' 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2627712 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2627712 ']' 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2627712 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627712 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627712' 01:09:42.425 killing process with pid 2627712 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2627712 01:09:42.425 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2627712 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:42.683 11:20:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:42.683 11:20:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:45.215 11:20:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:09:45.215 01:09:45.215 real 0m23.818s 01:09:45.215 user 0m39.677s 01:09:45.215 sys 0m9.726s 01:09:45.215 11:20:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:45.215 11:20:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:09:45.215 ************************************ 01:09:45.215 END TEST nvmf_interrupt 01:09:45.215 ************************************ 01:09:45.215 01:09:45.215 real 30m29.189s 01:09:45.215 user 61m43.952s 01:09:45.215 sys 11m1.259s 01:09:45.215 11:20:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:45.215 11:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:45.215 ************************************ 01:09:45.215 END TEST nvmf_tcp 01:09:45.215 ************************************ 01:09:45.215 11:20:45 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 01:09:45.215 11:20:45 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:09:45.215 11:20:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:09:45.215 11:20:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:45.215 11:20:45 -- common/autotest_common.sh@10 -- # set +x 01:09:45.215 ************************************ 01:09:45.215 START TEST spdkcli_nvmf_tcp 01:09:45.215 ************************************ 01:09:45.215 11:20:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:09:45.215 * Looking for test storage... 01:09:45.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:45.215 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:09:45.216 
11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:45.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:45.216 --rc genhtml_branch_coverage=1 01:09:45.216 --rc genhtml_function_coverage=1 01:09:45.216 
--rc genhtml_legend=1 01:09:45.216 --rc geninfo_all_blocks=1 01:09:45.216 --rc geninfo_unexecuted_blocks=1 01:09:45.216 01:09:45.216 ' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:45.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:45.216 --rc genhtml_branch_coverage=1 01:09:45.216 --rc genhtml_function_coverage=1 01:09:45.216 --rc genhtml_legend=1 01:09:45.216 --rc geninfo_all_blocks=1 01:09:45.216 --rc geninfo_unexecuted_blocks=1 01:09:45.216 01:09:45.216 ' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:45.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:45.216 --rc genhtml_branch_coverage=1 01:09:45.216 --rc genhtml_function_coverage=1 01:09:45.216 --rc genhtml_legend=1 01:09:45.216 --rc geninfo_all_blocks=1 01:09:45.216 --rc geninfo_unexecuted_blocks=1 01:09:45.216 01:09:45.216 ' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:45.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:45.216 --rc genhtml_branch_coverage=1 01:09:45.216 --rc genhtml_function_coverage=1 01:09:45.216 --rc genhtml_legend=1 01:09:45.216 --rc geninfo_all_blocks=1 01:09:45.216 --rc geninfo_unexecuted_blocks=1 01:09:45.216 01:09:45.216 ' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:09:45.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2630037 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2630037 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2630037 ']' 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:09:45.216 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:45.217 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:45.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:45.217 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:45.217 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:45.217 [2024-12-09 11:20:46.256995] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:09:45.217 [2024-12-09 11:20:46.257060] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630037 ] 01:09:45.217 [2024-12-09 11:20:46.367217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:09:45.475 [2024-12-09 11:20:46.425043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:45.475 [2024-12-09 11:20:46.425049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:45.475 11:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:09:45.475 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:09:45.475 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:09:45.475 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:09:45.475 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 01:09:45.475 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:09:45.475 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:09:45.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:45.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:45.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:09:45.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:09:45.475 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:09:45.475 ' 01:09:48.002 [2024-12-09 11:20:49.169613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:49.372 [2024-12-09 11:20:50.393932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:09:51.898 [2024-12-09 11:20:52.641205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:09:53.794 [2024-12-09 11:20:54.575546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:09:55.168 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:09:55.168 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:09:55.168 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:09:55.168 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 01:09:55.168 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:09:55.168 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:09:55.168 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:09:55.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:55.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:55.168 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:09:55.168 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:09:55.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:09:55.168 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:09:55.168 11:20:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:55.734 11:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:09:55.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:09:55.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:09:55.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:09:55.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:09:55.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:09:55.734 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:09:55.734 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:09:55.734 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 01:09:55.734 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:09:55.734 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:09:55.734 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:09:55.734 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:09:55.734 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:09:55.734 ' 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:10:00.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:10:00.993 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 01:10:00.993 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:10:00.993 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:10:00.993 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:10:00.994 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:10:00.994 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:10:00.994 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 01:10:00.994 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2630037 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2630037 ']' 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2630037 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630037 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630037' 01:10:00.994 killing process with pid 2630037 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2630037 01:10:00.994 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2630037 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2630037 ']' 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2630037 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2630037 ']' 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2630037 01:10:01.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2630037) - No such process 01:10:01.250 11:21:02 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2630037 is not found' 01:10:01.250 Process with pid 2630037 is not found 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:10:01.250 01:10:01.250 real 0m16.440s 01:10:01.250 user 0m34.206s 01:10:01.250 sys 0m0.923s 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:01.250 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:01.250 ************************************ 01:10:01.250 END TEST spdkcli_nvmf_tcp 01:10:01.250 ************************************ 01:10:01.508 11:21:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:10:01.508 11:21:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:01.508 11:21:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:01.508 11:21:02 -- common/autotest_common.sh@10 -- # set +x 01:10:01.508 ************************************ 01:10:01.508 START TEST nvmf_identify_passthru 01:10:01.508 ************************************ 01:10:01.508 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:10:01.508 * Looking for test storage... 
01:10:01.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:10:01.508 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:01.508 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 01:10:01.508 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:01.508 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:01.508 11:21:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:01.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.766 --rc genhtml_branch_coverage=1 01:10:01.766 --rc genhtml_function_coverage=1 01:10:01.766 --rc genhtml_legend=1 01:10:01.766 --rc geninfo_all_blocks=1 01:10:01.766 --rc geninfo_unexecuted_blocks=1 01:10:01.766 01:10:01.766 ' 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:01.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.766 --rc genhtml_branch_coverage=1 01:10:01.766 --rc genhtml_function_coverage=1 
01:10:01.766 --rc genhtml_legend=1 01:10:01.766 --rc geninfo_all_blocks=1 01:10:01.766 --rc geninfo_unexecuted_blocks=1 01:10:01.766 01:10:01.766 ' 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:01.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.766 --rc genhtml_branch_coverage=1 01:10:01.766 --rc genhtml_function_coverage=1 01:10:01.766 --rc genhtml_legend=1 01:10:01.766 --rc geninfo_all_blocks=1 01:10:01.766 --rc geninfo_unexecuted_blocks=1 01:10:01.766 01:10:01.766 ' 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:01.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.766 --rc genhtml_branch_coverage=1 01:10:01.766 --rc genhtml_function_coverage=1 01:10:01.766 --rc genhtml_legend=1 01:10:01.766 --rc geninfo_all_blocks=1 01:10:01.766 --rc geninfo_unexecuted_blocks=1 01:10:01.766 01:10:01.766 ' 01:10:01.766 11:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:01.766 11:21:02 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:01.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:01.766 11:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:10:01.766 11:21:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.766 11:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:10:01.766 11:21:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 01:10:01.766 11:21:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 01:10:08.316 11:21:08 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:10:08.316 
11:21:08 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:10:08.316 Found 0000:af:00.0 (0x8086 - 0x159b) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:10:08.316 Found 0000:af:00.1 (0x8086 - 0x159b) 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:10:08.316 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:10:08.317 Found net devices under 0000:af:00.0: cvl_0_0 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:10:08.317 Found net devices under 0000:af:00.1: cvl_0_1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:10:08.317 11:21:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:10:08.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:08.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 01:10:08.317 01:10:08.317 --- 10.0.0.2 ping statistics --- 01:10:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:08.317 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:10:08.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
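The nvmf_tcp_init sequence traced above moves one port of the two-port NIC into a network namespace so target and initiator can talk over real hardware on one host. A minimal dry-run sketch of that plumbing, mirroring the interface names and 10.0.0.0/24 addresses from this log; the `run` wrapper is a hypothetical helper that only echoes, so the sketch stays runnable without root or the actual NICs (replace it with `sudo "$@"` to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
TGT_IF=cvl_0_0           # physical port handed to the target
INI_IF=cvl_0_1           # peer port left in the root namespace (initiator)
NS=${TGT_IF}_ns_spdk     # namespace name, e.g. cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # echo-only; swap for 'sudo "$@"' to apply

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                 # target port leaves root ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the sanity check that this wiring is bidirectional before any NVMe/TCP traffic flows.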
01:10:08.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 01:10:08.317 01:10:08.317 --- 10.0.0.1 ping statistics --- 01:10:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:08.317 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:08.317 11:21:09 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:10:08.317 11:21:09 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 01:10:08.317 11:21:09 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:10:08.317 11:21:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:10:14.873 11:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ83030AYC4P0DGN 01:10:14.873 11:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 01:10:14.873 11:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:10:14.873 11:21:15 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 01:10:21.438 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 01:10:21.438 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.439 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.439 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2637224 01:10:21.439 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:10:21.439 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:10:21.439 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2637224 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2637224 ']' 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:21.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:21.439 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.439 [2024-12-09 11:21:22.495674] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:10:21.439 [2024-12-09 11:21:22.495749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:21.710 [2024-12-09 11:21:22.628364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:21.710 [2024-12-09 11:21:22.683518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:21.710 [2024-12-09 11:21:22.683569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:21.710 [2024-12-09 11:21:22.683584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:21.710 [2024-12-09 11:21:22.683598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:21.710 [2024-12-09 11:21:22.683609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:10:21.710 [2024-12-09 11:21:22.685505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:21.710 [2024-12-09 11:21:22.685593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:10:21.710 [2024-12-09 11:21:22.685686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:10:21.710 [2024-12-09 11:21:22.685691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 01:10:21.710 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.710 INFO: Log level set to 20 01:10:21.710 INFO: Requests: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "method": "nvmf_set_config", 01:10:21.710 "id": 1, 01:10:21.710 "params": { 01:10:21.710 "admin_cmd_passthru": { 01:10:21.710 "identify_ctrlr": true 01:10:21.710 } 01:10:21.710 } 01:10:21.710 } 01:10:21.710 01:10:21.710 INFO: response: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "id": 1, 01:10:21.710 "result": true 01:10:21.710 } 01:10:21.710 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:21.710 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.710 INFO: Setting log level to 20 01:10:21.710 INFO: Setting log level to 20 01:10:21.710 INFO: Log level set to 20 01:10:21.710 INFO: Log level set to 20 01:10:21.710 
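The INFO blocks above are the JSON-RPC exchange behind `rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr`. A sketch of the same request assembled by hand; the socket path matches the log's default `/var/tmp/spdk.sock`, and actually sending it (e.g. via `nc -U`) is left commented so the sketch runs without a live target:

```shell
#!/usr/bin/env bash
# The request body rpc.py would send for nvmf_set_config with
# --passthru-identify-ctrlr, as seen verbatim in the log above.
req='{"jsonrpc":"2.0","method":"nvmf_set_config","id":1,"params":{"admin_cmd_passthru":{"identify_ctrlr":true}}}'
sock=/var/tmp/spdk.sock

# With nvmf_tgt listening on $sock this would deliver it, e.g.:
#   printf '%s' "$req" | nc -U "$sock"
echo "$req"
```

The `"result": true` response in the log is the target acknowledging that identify-controller admin commands will now be passed through to the backing NVMe device.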
INFO: Requests: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "method": "framework_start_init", 01:10:21.710 "id": 1 01:10:21.710 } 01:10:21.710 01:10:21.710 INFO: Requests: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "method": "framework_start_init", 01:10:21.710 "id": 1 01:10:21.710 } 01:10:21.710 01:10:21.710 [2024-12-09 11:21:22.821006] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:10:21.710 INFO: response: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "id": 1, 01:10:21.710 "result": true 01:10:21.710 } 01:10:21.710 01:10:21.710 INFO: response: 01:10:21.710 { 01:10:21.710 "jsonrpc": "2.0", 01:10:21.710 "id": 1, 01:10:21.710 "result": true 01:10:21.710 } 01:10:21.710 01:10:21.710 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:21.711 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:10:21.711 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:21.711 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.711 INFO: Setting log level to 40 01:10:21.711 INFO: Setting log level to 40 01:10:21.711 INFO: Setting log level to 40 01:10:21.711 [2024-12-09 11:21:22.834642] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:21.711 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:21.711 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:10:21.711 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:21.711 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:21.968 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 01:10:21.968 11:21:22 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:21.968 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.243 Nvme0n1 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.243 [2024-12-09 11:21:25.787344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:25.243 11:21:25 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.243 [ 01:10:25.243 { 01:10:25.243 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:10:25.243 "subtype": "Discovery", 01:10:25.243 "listen_addresses": [], 01:10:25.243 "allow_any_host": true, 01:10:25.243 "hosts": [] 01:10:25.243 }, 01:10:25.243 { 01:10:25.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:10:25.243 "subtype": "NVMe", 01:10:25.243 "listen_addresses": [ 01:10:25.243 { 01:10:25.243 "trtype": "TCP", 01:10:25.243 "adrfam": "IPv4", 01:10:25.243 "traddr": "10.0.0.2", 01:10:25.243 "trsvcid": "4420" 01:10:25.243 } 01:10:25.243 ], 01:10:25.243 "allow_any_host": true, 01:10:25.243 "hosts": [], 01:10:25.243 "serial_number": "SPDK00000000000001", 01:10:25.243 "model_number": "SPDK bdev Controller", 01:10:25.243 "max_namespaces": 1, 01:10:25.243 "min_cntlid": 1, 01:10:25.243 "max_cntlid": 65519, 01:10:25.243 "namespaces": [ 01:10:25.243 { 01:10:25.243 "nsid": 1, 01:10:25.243 "bdev_name": "Nvme0n1", 01:10:25.243 "name": "Nvme0n1", 01:10:25.243 "nguid": "E317CBF4509B4EB2B84F79FE913368B1", 01:10:25.243 "uuid": "e317cbf4-509b-4eb2-b84f-79fe913368b1" 01:10:25.243 } 01:10:25.243 ] 01:10:25.243 } 01:10:25.243 ] 01:10:25.243 11:21:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:10:25.243 11:21:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:10:25.243 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ83030AYC4P0DGN 01:10:25.243 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:10:25.243 11:21:26 
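Fields of the `nvmf_get_subsystems` reply above can be pulled out with plain `sed` when `jq` is not available (the SPDK scripts themselves prefer `jq`). A sketch against a trimmed, single-line copy of the log's JSON:

```shell
#!/usr/bin/env bash
# Extract the listener address from a (trimmed) nvmf_get_subsystems reply.
# jq -r '.[1].listen_addresses[0].traddr' would be the robust equivalent.
json='{"nqn":"nqn.2016-06.io.spdk:cnode1","listen_addresses":[{"trtype":"TCP","adrfam":"IPv4","traddr":"10.0.0.2","trsvcid":"4420"}]}'

traddr=$(printf '%s' "$json" | sed -n 's/.*"traddr": *"\([^"]*\)".*/\1/p')
echo "$traddr"   # 10.0.0.2
```

That address is exactly what the subsequent `spdk_nvme_identify -r 'trtype:tcp ... traddr:10.0.0.2 ...'` invocations connect to.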
nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:10:25.243 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 01:10:25.501 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 01:10:25.501 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ83030AYC4P0DGN '!=' BTLJ83030AYC4P0DGN ']' 01:10:25.501 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 01:10:25.501 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:25.501 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:25.501 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:25.769 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:25.769 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:10:25.769 11:21:26 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:25.770 rmmod nvme_tcp 01:10:25.770 rmmod nvme_fabrics 01:10:25.770 rmmod nvme_keyring 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@127 -- 
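The pass/fail logic of identify_passthru.sh reduces to two string comparisons: the serial and model numbers read over PCIe must match the ones read back over NVMe/TCP, proving the identify data was passed through. A sketch with the values from this run:

```shell
#!/usr/bin/env bash
# Cross-check identify data from the two paths, as lines 63 and 68 of
# identify_passthru.sh do. Values below are the ones captured in this log.
nvme_serial=BTLJ83030AYC4P0DGN   # via PCIe (0000:5e:00.0)
nvmf_serial=BTLJ83030AYC4P0DGN   # via NVMe/TCP (10.0.0.2:4420)
nvme_model=INTEL
nvmf_model=INTEL

status=pass
if [ "$nvme_serial" != "$nvmf_serial" ]; then status=fail; fi
if [ "$nvme_model"  != "$nvmf_model"  ]; then status=fail; fi
echo "$status"   # pass
```

Because both comparisons hold, the script proceeds straight to `nvmf_delete_subsystem` and teardown rather than aborting.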
# modprobe -v -r nvme-fabrics 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2637224 ']' 01:10:25.770 11:21:26 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2637224 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2637224 ']' 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2637224 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637224 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637224' 01:10:25.770 killing process with pid 2637224 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2637224 01:10:25.770 11:21:26 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2637224 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 01:10:29.948 
11:21:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 01:10:29.948 11:21:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:29.948 11:21:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:10:29.948 11:21:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:31.848 11:21:32 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:10:31.848 01:10:31.848 real 0m30.277s 01:10:31.848 user 0m42.905s 01:10:31.848 sys 0m7.270s 01:10:31.848 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:31.848 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:10:31.848 ************************************ 01:10:31.848 END TEST nvmf_identify_passthru 01:10:31.848 ************************************ 01:10:31.848 11:21:32 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 01:10:31.848 11:21:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:10:31.848 11:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:31.848 11:21:32 -- common/autotest_common.sh@10 -- # set +x 01:10:31.848 ************************************ 01:10:31.848 START TEST nvmf_dif 01:10:31.848 ************************************ 01:10:31.848 11:21:32 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 01:10:31.848 * Looking for test storage... 
01:10:31.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:10:31.848 11:21:32 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:31.848 11:21:32 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 01:10:31.848 11:21:32 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:32.106 11:21:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:32.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:32.106 --rc genhtml_branch_coverage=1 01:10:32.106 --rc genhtml_function_coverage=1 01:10:32.106 --rc genhtml_legend=1 01:10:32.106 --rc geninfo_all_blocks=1 01:10:32.106 --rc geninfo_unexecuted_blocks=1 01:10:32.106 01:10:32.106 ' 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:32.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:32.106 --rc genhtml_branch_coverage=1 01:10:32.106 --rc genhtml_function_coverage=1 01:10:32.106 --rc genhtml_legend=1 01:10:32.106 --rc geninfo_all_blocks=1 01:10:32.106 --rc geninfo_unexecuted_blocks=1 01:10:32.106 01:10:32.106 ' 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
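The `lt 1.15 2` trace above is scripts/common.sh comparing dotted version strings field by field (here, deciding whether the installed lcov predates 2.x). The same decision can be sketched far more compactly with GNU `sort -V`, an assumption this one-liner relies on instead of the script's explicit per-field loop:

```shell
#!/usr/bin/env bash
# "Is $1 strictly less than $2 as a dotted version?" -- compact stand-in for
# the cmp_versions loop traced above, assuming GNU coreutils sort -V.
lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 < 2"
```

With the lcov 1.15 found on this node, `lt 1.15 2` succeeds, which is why the script falls back to the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling seen in the following lines.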
'LCOV=lcov 01:10:32.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:32.106 --rc genhtml_branch_coverage=1 01:10:32.106 --rc genhtml_function_coverage=1 01:10:32.106 --rc genhtml_legend=1 01:10:32.106 --rc geninfo_all_blocks=1 01:10:32.106 --rc geninfo_unexecuted_blocks=1 01:10:32.106 01:10:32.106 ' 01:10:32.106 11:21:33 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:32.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:32.106 --rc genhtml_branch_coverage=1 01:10:32.106 --rc genhtml_function_coverage=1 01:10:32.106 --rc genhtml_legend=1 01:10:32.107 --rc geninfo_all_blocks=1 01:10:32.107 --rc geninfo_unexecuted_blocks=1 01:10:32.107 01:10:32.107 ' 01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:10:32.107 11:21:33 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:10:32.107 11:21:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:10:32.107 11:21:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:32.107 11:21:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:32.107 11:21:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:32.107 11:21:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:32.107 11:21:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:32.107 11:21:33 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:32.107 11:21:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:10:32.107 11:21:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:32.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:10:32.107 11:21:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:32.107 11:21:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:10:32.107 11:21:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:10:32.107 11:21:33 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 01:10:32.107 11:21:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:10:38.742 11:21:39 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 01:10:38.743 11:21:39 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:10:38.743 Found 0000:af:00.0 (0x8086 - 0x159b) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:10:38.743 Found 0000:af:00.1 (0x8086 - 0x159b) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 01:10:38.743 11:21:39 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:10:38.743 Found net devices under 0000:af:00.0: cvl_0_0 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:10:38.743 Found net devices under 0000:af:00.1: cvl_0_1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:10:38.743 
11:21:39 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:10:38.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:10:38.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 01:10:38.743 01:10:38.743 --- 10.0.0.2 ping statistics --- 01:10:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:38.743 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:10:38.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:38.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 01:10:38.743 01:10:38.743 --- 10.0.0.1 ping statistics --- 01:10:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:38.743 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@450 -- # return 0 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:10:38.743 11:21:39 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:10:41.268 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 01:10:41.268 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 01:10:41.268 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 01:10:41.268 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:41.268 11:21:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:41.528 11:21:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:10:41.528 11:21:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:10:41.528 11:21:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:41.528 11:21:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2642456 01:10:41.528 11:21:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2642456 01:10:41.528 11:21:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2642456 ']' 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:10:41.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:41.528 11:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:41.528 [2024-12-09 11:21:42.526798] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:10:41.528 [2024-12-09 11:21:42.526877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:41.528 [2024-12-09 11:21:42.659182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:41.789 [2024-12-09 11:21:42.712960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:41.789 [2024-12-09 11:21:42.713005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:41.789 [2024-12-09 11:21:42.713021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:41.789 [2024-12-09 11:21:42.713035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:41.789 [2024-12-09 11:21:42.713047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:10:41.789 [2024-12-09 11:21:42.713682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:10:41.789 11:21:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:41.789 11:21:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:41.789 11:21:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:10:41.789 11:21:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:41.789 [2024-12-09 11:21:42.890052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:41.789 11:21:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:41.789 ************************************ 01:10:41.789 START TEST fio_dif_1_default 01:10:41.789 ************************************ 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:41.789 bdev_null0 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:41.789 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:42.047 [2024-12-09 11:21:42.978453] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:10:42.047 { 01:10:42.047 "params": { 01:10:42.047 "name": "Nvme$subsystem", 01:10:42.047 "trtype": "$TEST_TRANSPORT", 01:10:42.047 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:42.047 "adrfam": "ipv4", 01:10:42.047 "trsvcid": "$NVMF_PORT", 01:10:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:42.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:42.047 "hdgst": ${hdgst:-false}, 01:10:42.047 "ddgst": ${ddgst:-false} 01:10:42.047 }, 01:10:42.047 "method": "bdev_nvme_attach_controller" 01:10:42.047 } 01:10:42.047 EOF 01:10:42.047 )") 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:10:42.047 11:21:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:10:42.047 "params": { 01:10:42.047 "name": "Nvme0", 01:10:42.047 "trtype": "tcp", 01:10:42.047 "traddr": "10.0.0.2", 01:10:42.047 "adrfam": "ipv4", 01:10:42.047 "trsvcid": "4420", 01:10:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:42.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:42.047 "hdgst": false, 01:10:42.047 "ddgst": false 01:10:42.047 }, 01:10:42.047 "method": "bdev_nvme_attach_controller" 01:10:42.047 }' 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:10:42.047 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:42.305 filename0: (g=0): rw=randread, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:10:42.305 fio-3.35 01:10:42.305 Starting 1 thread 01:10:54.500 01:10:54.500 filename0: (groupid=0, jobs=1): err= 0: pid=2642751: Mon Dec 9 11:21:54 2024 01:10:54.500 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 01:10:54.500 slat (nsec): min=8793, max=28726, avg=9111.87, stdev=740.54 01:10:54.500 clat (usec): min=40811, max=43078, avg=41016.47, stdev=226.84 01:10:54.500 lat (usec): min=40820, max=43107, avg=41025.58, stdev=227.06 01:10:54.500 clat percentiles (usec): 01:10:54.500 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 01:10:54.500 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:10:54.500 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:10:54.500 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 01:10:54.500 | 99.99th=[43254] 01:10:54.500 bw ( KiB/s): min= 383, max= 416, per=99.53%, avg=388.75, stdev=11.75, samples=20 01:10:54.500 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 01:10:54.500 lat (msec) : 50=100.00% 01:10:54.500 cpu : usr=86.77%, sys=12.91%, ctx=13, majf=0, minf=0 01:10:54.500 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:54.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:54.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:54.500 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:54.500 latency : target=0, window=0, percentile=100.00%, depth=4 01:10:54.500 01:10:54.500 Run status group 0 (all jobs): 01:10:54.500 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10015-10015msec 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:10:54.500 11:21:54 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:10:54.500 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 01:10:54.501 real 0m11.439s 01:10:54.501 user 0m12.809s 01:10:54.501 sys 0m1.682s 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 ************************************ 01:10:54.501 END TEST fio_dif_1_default 01:10:54.501 ************************************ 01:10:54.501 11:21:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:10:54.501 11:21:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:10:54.501 11:21:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 ************************************ 01:10:54.501 START TEST fio_dif_1_multi_subsystems 01:10:54.501 
************************************ 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 bdev_null0 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 [2024-12-09 11:21:54.503548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 bdev_null1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 
11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:10:54.501 11:21:54 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:10:54.501 { 01:10:54.501 "params": { 01:10:54.501 "name": "Nvme$subsystem", 01:10:54.501 "trtype": "$TEST_TRANSPORT", 01:10:54.501 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:54.501 "adrfam": "ipv4", 01:10:54.501 "trsvcid": "$NVMF_PORT", 01:10:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:54.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:54.501 "hdgst": ${hdgst:-false}, 01:10:54.501 "ddgst": ${ddgst:-false} 01:10:54.501 }, 01:10:54.501 "method": "bdev_nvme_attach_controller" 01:10:54.501 } 01:10:54.501 EOF 01:10:54.501 )") 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:10:54.501 { 01:10:54.501 "params": { 01:10:54.501 "name": "Nvme$subsystem", 01:10:54.501 "trtype": "$TEST_TRANSPORT", 01:10:54.501 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:54.501 "adrfam": "ipv4", 01:10:54.501 "trsvcid": "$NVMF_PORT", 01:10:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:54.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:54.501 "hdgst": ${hdgst:-false}, 01:10:54.501 "ddgst": ${ddgst:-false} 01:10:54.501 }, 01:10:54.501 "method": "bdev_nvme_attach_controller" 01:10:54.501 } 01:10:54.501 EOF 01:10:54.501 )") 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:10:54.501 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:10:54.501 "params": { 01:10:54.501 "name": "Nvme0", 01:10:54.501 "trtype": "tcp", 01:10:54.501 "traddr": "10.0.0.2", 01:10:54.501 "adrfam": "ipv4", 01:10:54.501 "trsvcid": "4420", 01:10:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:54.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:54.501 "hdgst": false, 01:10:54.501 "ddgst": false 01:10:54.501 }, 01:10:54.501 "method": "bdev_nvme_attach_controller" 01:10:54.501 },{ 01:10:54.501 "params": { 01:10:54.501 "name": "Nvme1", 01:10:54.501 "trtype": "tcp", 01:10:54.501 "traddr": "10.0.0.2", 01:10:54.501 "adrfam": "ipv4", 01:10:54.501 "trsvcid": "4420", 01:10:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:54.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:54.501 "hdgst": false, 01:10:54.501 "ddgst": false 01:10:54.501 }, 01:10:54.502 "method": "bdev_nvme_attach_controller" 01:10:54.502 }' 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:10:54.502 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:10:54.502 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:10:54.502 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:10:54.502 fio-3.35 01:10:54.502 Starting 2 threads 01:11:06.682 01:11:06.682 filename0: (groupid=0, jobs=1): err= 0: pid=2644278: Mon Dec 9 11:22:05 2024 01:11:06.682 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10028msec) 01:11:06.682 slat (nsec): min=8854, max=25536, avg=10508.23, stdev=2549.19 01:11:06.682 clat (usec): min=40836, max=44446, avg=41066.03, stdev=359.09 01:11:06.682 lat (usec): min=40845, max=44472, avg=41076.54, stdev=359.30 01:11:06.682 clat percentiles (usec): 01:11:06.682 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 01:11:06.682 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:11:06.682 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 01:11:06.682 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 01:11:06.682 | 99.99th=[44303] 01:11:06.682 bw ( KiB/s): min= 384, max= 416, per=48.88%, avg=388.80, stdev=11.72, samples=20 01:11:06.682 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 01:11:06.682 lat (msec) : 50=100.00% 01:11:06.682 cpu : usr=93.33%, sys=6.39%, ctx=16, majf=0, minf=28 01:11:06.682 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.682 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.682 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.682 latency : target=0, window=0, percentile=100.00%, depth=4 01:11:06.682 filename1: (groupid=0, jobs=1): err= 0: pid=2644279: Mon Dec 9 11:22:05 2024 01:11:06.682 read: IOPS=101, BW=405KiB/s (415kB/s)(4064KiB/10038msec) 01:11:06.682 slat (nsec): min=2939, max=31770, avg=7637.13, stdev=2713.08 01:11:06.682 clat (usec): min=519, max=45476, avg=39495.13, stdev=7885.24 01:11:06.683 lat (usec): min=525, max=45487, avg=39502.76, stdev=7885.23 01:11:06.683 clat percentiles (usec): 01:11:06.683 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 01:11:06.683 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:11:06.683 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 01:11:06.683 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 01:11:06.683 | 99.99th=[45351] 01:11:06.683 bw ( KiB/s): min= 384, max= 448, per=50.90%, avg=404.80, stdev=26.01, samples=20 01:11:06.683 iops : min= 96, max= 112, avg=101.20, stdev= 6.50, samples=20 01:11:06.683 lat (usec) : 750=3.15%, 1000=0.79% 01:11:06.683 lat (msec) : 50=96.06% 01:11:06.683 cpu : usr=94.11%, sys=5.61%, ctx=16, majf=0, minf=77 01:11:06.683 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.683 issued rwts: total=1016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.683 latency : target=0, window=0, percentile=100.00%, depth=4 01:11:06.683 01:11:06.683 Run status group 0 (all jobs): 01:11:06.683 READ: bw=794KiB/s (813kB/s), 389KiB/s-405KiB/s (399kB/s-415kB/s), io=7968KiB (8159kB), run=10028-10038msec 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:11:06.683 11:22:05 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 01:11:06.683 real 0m11.523s 01:11:06.683 user 0m23.009s 01:11:06.683 sys 0m1.598s 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:06.683 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 ************************************ 01:11:06.683 END TEST fio_dif_1_multi_subsystems 01:11:06.683 ************************************ 01:11:06.683 11:22:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:11:06.683 11:22:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:11:06.683 11:22:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:06.683 11:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 ************************************ 01:11:06.683 START TEST fio_dif_rand_params 01:11:06.683 ************************************ 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 bdev_null0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 11:22:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:06.683 [2024-12-09 11:22:06.115905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:06.683 { 01:11:06.683 "params": { 01:11:06.683 "name": "Nvme$subsystem", 01:11:06.683 "trtype": "$TEST_TRANSPORT", 01:11:06.683 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:06.683 "adrfam": "ipv4", 01:11:06.683 "trsvcid": "$NVMF_PORT", 01:11:06.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:06.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:06.683 "hdgst": ${hdgst:-false}, 01:11:06.683 "ddgst": ${ddgst:-false} 01:11:06.683 }, 01:11:06.683 "method": 
"bdev_nvme_attach_controller" 01:11:06.683 } 01:11:06.683 EOF 01:11:06.683 )") 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:11:06.683 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:06.684 11:22:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:06.684 "params": { 01:11:06.684 "name": "Nvme0", 01:11:06.684 "trtype": "tcp", 01:11:06.684 "traddr": "10.0.0.2", 01:11:06.684 "adrfam": "ipv4", 01:11:06.684 "trsvcid": "4420", 01:11:06.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:06.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:06.684 "hdgst": false, 01:11:06.684 "ddgst": false 01:11:06.684 }, 01:11:06.684 "method": "bdev_nvme_attach_controller" 01:11:06.684 }' 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:11:06.684 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:06.684 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:11:06.684 ... 01:11:06.684 fio-3.35 01:11:06.684 Starting 3 threads 01:11:11.943 01:11:11.943 filename0: (groupid=0, jobs=1): err= 0: pid=2645814: Mon Dec 9 11:22:12 2024 01:11:11.943 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(127MiB/5004msec) 01:11:11.943 slat (usec): min=8, max=128, avg=17.82, stdev= 8.02 01:11:11.943 clat (usec): min=4134, max=57422, avg=14707.01, stdev=7793.77 01:11:11.944 lat (usec): min=4149, max=57432, avg=14724.83, stdev=7793.39 01:11:11.944 clat percentiles (usec): 01:11:11.944 | 1.00th=[ 5276], 5.00th=[ 6980], 10.00th=[ 9896], 20.00th=[12125], 01:11:11.944 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13698], 60.00th=[14222], 01:11:11.944 | 70.00th=[14877], 80.00th=[15533], 90.00th=[16319], 95.00th=[17433], 01:11:11.944 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56886], 99.95th=[57410], 01:11:11.944 | 99.99th=[57410] 01:11:11.944 bw ( KiB/s): min=18432, max=28416, per=33.93%, avg=26035.20, stdev=2997.74, samples=10 01:11:11.944 iops : min= 144, max= 222, avg=203.40, stdev=23.42, samples=10 01:11:11.944 lat (msec) : 10=10.30%, 20=85.87%, 50=1.86%, 100=1.96% 01:11:11.944 cpu : usr=94.38%, sys=5.26%, ctx=12, majf=0, minf=130 01:11:11.944 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:11.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:11.944 latency : target=0, window=0, percentile=100.00%, depth=3 01:11:11.944 filename0: (groupid=0, jobs=1): err= 0: pid=2645815: Mon Dec 9 11:22:12 2024 01:11:11.944 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(141MiB/5046msec) 01:11:11.944 slat (nsec): min=6854, max=50911, avg=19213.22, stdev=6648.48 01:11:11.944 clat 
(usec): min=4671, max=53966, avg=13382.50, stdev=7362.23 01:11:11.944 lat (usec): min=4682, max=53989, avg=13401.71, stdev=7361.68 01:11:11.944 clat percentiles (usec): 01:11:11.944 | 1.00th=[ 5080], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11076], 01:11:11.944 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 01:11:11.944 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14222], 95.00th=[15401], 01:11:11.944 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 01:11:11.944 | 99.99th=[53740] 01:11:11.944 bw ( KiB/s): min=13824, max=31744, per=37.50%, avg=28774.40, stdev=5321.13, samples=10 01:11:11.944 iops : min= 108, max= 248, avg=224.80, stdev=41.57, samples=10 01:11:11.944 lat (msec) : 10=10.12%, 20=86.23%, 50=1.87%, 100=1.78% 01:11:11.944 cpu : usr=92.94%, sys=6.01%, ctx=112, majf=0, minf=118 01:11:11.944 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:11.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:11.944 latency : target=0, window=0, percentile=100.00%, depth=3 01:11:11.944 filename0: (groupid=0, jobs=1): err= 0: pid=2645816: Mon Dec 9 11:22:12 2024 01:11:11.944 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(110MiB/5043msec) 01:11:11.944 slat (nsec): min=8919, max=70137, avg=19443.89, stdev=7607.69 01:11:11.944 clat (usec): min=5124, max=56407, avg=17121.45, stdev=7427.58 01:11:11.944 lat (usec): min=5134, max=56434, avg=17140.89, stdev=7426.76 01:11:11.944 clat percentiles (usec): 01:11:11.944 | 1.00th=[ 5276], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[14746], 01:11:11.944 | 30.00th=[15533], 40.00th=[16057], 50.00th=[16450], 60.00th=[16909], 01:11:11.944 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18482], 95.00th=[19530], 01:11:11.944 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 
01:11:11.944 | 99.99th=[56361] 01:11:11.944 bw ( KiB/s): min=13824, max=24320, per=29.29%, avg=22476.80, stdev=3078.63, samples=10 01:11:11.944 iops : min= 108, max= 190, avg=175.60, stdev=24.05, samples=10 01:11:11.944 lat (msec) : 10=7.16%, 20=88.30%, 50=2.73%, 100=1.82% 01:11:11.944 cpu : usr=94.11%, sys=5.47%, ctx=28, majf=0, minf=100 01:11:11.944 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:11.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:11.944 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:11.944 latency : target=0, window=0, percentile=100.00%, depth=3 01:11:11.944 01:11:11.944 Run status group 0 (all jobs): 01:11:11.944 READ: bw=74.9MiB/s (78.6MB/s), 21.8MiB/s-27.9MiB/s (22.9MB/s-29.2MB/s), io=378MiB (396MB), run=5004-5046msec 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 bdev_null0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 
11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 [2024-12-09 11:22:12.620970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 bdev_null1 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 
11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
01:11:11.944 bdev_null2 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.944 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:11.945 { 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme$subsystem", 01:11:11.945 "trtype": "$TEST_TRANSPORT", 01:11:11.945 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "$NVMF_PORT", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:11.945 "hdgst": ${hdgst:-false}, 01:11:11.945 "ddgst": ${ddgst:-false} 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 } 01:11:11.945 EOF 01:11:11.945 )") 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:11.945 { 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme$subsystem", 01:11:11.945 "trtype": "$TEST_TRANSPORT", 01:11:11.945 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "$NVMF_PORT", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:11.945 "hdgst": ${hdgst:-false}, 01:11:11.945 "ddgst": ${ddgst:-false} 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 } 01:11:11.945 EOF 01:11:11.945 )") 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:11.945 { 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme$subsystem", 01:11:11.945 "trtype": "$TEST_TRANSPORT", 01:11:11.945 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "$NVMF_PORT", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:11.945 "hdgst": ${hdgst:-false}, 01:11:11.945 "ddgst": ${ddgst:-false} 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 } 01:11:11.945 EOF 01:11:11.945 )") 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme0", 01:11:11.945 "trtype": "tcp", 01:11:11.945 "traddr": "10.0.0.2", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "4420", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:11.945 "hdgst": false, 01:11:11.945 "ddgst": false 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 },{ 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme1", 01:11:11.945 "trtype": "tcp", 01:11:11.945 "traddr": "10.0.0.2", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "4420", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:11.945 "hdgst": false, 01:11:11.945 "ddgst": false 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 },{ 01:11:11.945 "params": { 01:11:11.945 "name": "Nvme2", 01:11:11.945 "trtype": "tcp", 01:11:11.945 "traddr": "10.0.0.2", 01:11:11.945 "adrfam": "ipv4", 01:11:11.945 "trsvcid": "4420", 01:11:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:11:11.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:11:11.945 "hdgst": false, 01:11:11.945 "ddgst": false 01:11:11.945 }, 01:11:11.945 "method": "bdev_nvme_attach_controller" 01:11:11.945 }' 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:11.945 11:22:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:11:11.945 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:11.945 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:11:11.945 ... 01:11:11.945 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:11:11.945 ... 01:11:11.945 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:11:11.945 ... 
01:11:11.945 fio-3.35 01:11:11.945 Starting 24 threads 01:11:24.141 01:11:24.141 filename0: (groupid=0, jobs=1): err= 0: pid=2646795: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.142 slat (nsec): min=10643, max=46291, avg=20586.07, stdev=5619.79 01:11:24.142 clat (usec): min=13441, max=57178, avg=37017.33, stdev=1915.02 01:11:24.142 lat (usec): min=13457, max=57209, avg=37037.91, stdev=1915.07 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[32375], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.142 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[54264], 01:11:24.142 | 99.99th=[57410] 01:11:24.142 bw ( KiB/s): min= 1664, max= 1792, per=4.20%, avg=1717.89, stdev=64.93, samples=19 01:11:24.142 iops : min= 416, max= 448, avg=429.47, stdev=16.23, samples=19 01:11:24.142 lat (msec) : 20=0.42%, 50=99.49%, 100=0.09% 01:11:24.142 cpu : usr=97.32%, sys=2.31%, ctx=13, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646796: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10006msec) 01:11:24.142 slat (nsec): min=9164, max=99165, avg=15416.70, stdev=5693.16 01:11:24.142 clat (usec): min=18404, max=57741, avg=37191.65, stdev=1813.40 01:11:24.142 lat (usec): min=18430, max=57775, avg=37207.06, stdev=1813.75 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 
1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.142 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 01:11:24.142 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[38011], 99.50th=[39060], 99.90th=[57410], 99.95th=[57410], 01:11:24.142 | 99.99th=[57934] 01:11:24.142 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1704.58, stdev=74.17, samples=19 01:11:24.142 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.142 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.142 cpu : usr=97.42%, sys=2.23%, ctx=14, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646797: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10008msec) 01:11:24.142 slat (usec): min=21, max=111, avg=38.33, stdev= 9.46 01:11:24.142 clat (usec): min=26133, max=56459, avg=36997.62, stdev=998.38 01:11:24.142 lat (usec): min=26188, max=56571, avg=37035.95, stdev=999.92 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 01:11:24.142 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38536], 99.90th=[45351], 99.95th=[45876], 01:11:24.142 | 99.99th=[56361] 01:11:24.142 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1708.80, stdev=62.64, samples=20 01:11:24.142 iops : min= 416, max= 448, avg=427.20, stdev=15.66, samples=20 
01:11:24.142 lat (msec) : 50=99.95%, 100=0.05% 01:11:24.142 cpu : usr=97.17%, sys=2.38%, ctx=12, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646798: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10011msec) 01:11:24.142 slat (usec): min=11, max=120, avg=70.68, stdev= 9.95 01:11:24.142 clat (usec): min=18490, max=69341, avg=36732.42, stdev=2128.89 01:11:24.142 lat (usec): min=18521, max=69374, avg=36803.10, stdev=2128.59 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[35914], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 01:11:24.142 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38536], 99.90th=[62653], 99.95th=[63177], 01:11:24.142 | 99.99th=[69731] 01:11:24.142 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1704.58, stdev=74.17, samples=19 01:11:24.142 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.142 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.142 cpu : usr=97.08%, sys=2.47%, ctx=12, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 
01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646799: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10011msec) 01:11:24.142 slat (nsec): min=6005, max=79897, avg=37923.29, stdev=8819.71 01:11:24.142 clat (usec): min=26084, max=50308, avg=37014.18, stdev=1023.14 01:11:24.142 lat (usec): min=26114, max=50350, avg=37052.10, stdev=1022.21 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 01:11:24.142 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38536], 99.90th=[47973], 99.95th=[47973], 01:11:24.142 | 99.99th=[50070] 01:11:24.142 bw ( KiB/s): min= 1654, max= 1792, per=4.18%, avg=1708.30, stdev=63.05, samples=20 01:11:24.142 iops : min= 413, max= 448, avg=427.05, stdev=15.79, samples=20 01:11:24.142 lat (msec) : 50=99.95%, 100=0.05% 01:11:24.142 cpu : usr=96.83%, sys=2.72%, ctx=15, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646800: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10008msec) 01:11:24.142 slat (usec): min=22, max=106, avg=44.09, stdev=16.13 01:11:24.142 clat (usec): min=18226, max=65203, avg=36950.96, stdev=1929.18 01:11:24.142 lat (usec): min=18259, max=65250, avg=36995.06, stdev=1928.84 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 01:11:24.142 | 
30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38536], 99.90th=[58459], 99.95th=[58983], 01:11:24.142 | 99.99th=[65274] 01:11:24.142 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.142 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.142 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.142 cpu : usr=96.95%, sys=2.59%, ctx=15, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646801: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10012msec) 01:11:24.142 slat (usec): min=4, max=118, avg=71.65, stdev=10.11 01:11:24.142 clat (usec): min=17164, max=63977, avg=36724.84, stdev=2168.51 01:11:24.142 lat (usec): min=17222, max=63991, avg=36796.49, stdev=2165.93 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[35914], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 01:11:24.142 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[38536], 99.90th=[63701], 99.95th=[64226], 01:11:24.142 | 99.99th=[64226] 01:11:24.142 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.142 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.142 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.142 cpu : usr=97.12%, 
sys=2.43%, ctx=15, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.142 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.142 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.142 filename0: (groupid=0, jobs=1): err= 0: pid=2646802: Mon Dec 9 11:22:24 2024 01:11:24.142 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.142 slat (nsec): min=12503, max=65307, avg=23503.60, stdev=6528.68 01:11:24.142 clat (usec): min=13347, max=55364, avg=36966.86, stdev=1826.75 01:11:24.142 lat (usec): min=13367, max=55385, avg=36990.36, stdev=1826.56 01:11:24.142 clat percentiles (usec): 01:11:24.142 | 1.00th=[32637], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.142 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.142 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.142 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 01:11:24.142 | 99.99th=[55313] 01:11:24.142 bw ( KiB/s): min= 1664, max= 1792, per=4.19%, avg=1715.20, stdev=64.34, samples=20 01:11:24.142 iops : min= 416, max= 448, avg=428.80, stdev=16.08, samples=20 01:11:24.142 lat (msec) : 20=0.42%, 50=99.54%, 100=0.05% 01:11:24.142 cpu : usr=96.63%, sys=2.92%, ctx=15, majf=0, minf=9 01:11:24.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646803: Mon Dec 9 
11:22:24 2024 01:11:24.143 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.143 slat (nsec): min=9877, max=45365, avg=20372.81, stdev=5459.41 01:11:24.143 clat (usec): min=13357, max=54573, avg=37012.74, stdev=1819.46 01:11:24.143 lat (usec): min=13382, max=54583, avg=37033.11, stdev=1819.30 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[32637], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 01:11:24.143 | 99.99th=[54789] 01:11:24.143 bw ( KiB/s): min= 1664, max= 1792, per=4.19%, avg=1715.20, stdev=64.34, samples=20 01:11:24.143 iops : min= 416, max= 448, avg=428.80, stdev=16.08, samples=20 01:11:24.143 lat (msec) : 20=0.37%, 50=99.58%, 100=0.05% 01:11:24.143 cpu : usr=97.18%, sys=2.45%, ctx=21, majf=0, minf=10 01:11:24.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646804: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=427, BW=1708KiB/s (1749kB/s)(16.7MiB/10004msec) 01:11:24.143 slat (nsec): min=9174, max=92466, avg=18292.79, stdev=17322.64 01:11:24.143 clat (usec): min=20973, max=81614, avg=37293.35, stdev=3001.55 01:11:24.143 lat (usec): min=20983, max=81672, avg=37311.64, stdev=3002.54 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[35914], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 
60.00th=[36963], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[51119], 99.90th=[81265], 99.95th=[81265], 01:11:24.143 | 99.99th=[81265] 01:11:24.143 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.143 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.143 lat (msec) : 50=99.44%, 100=0.56% 01:11:24.143 cpu : usr=97.05%, sys=2.36%, ctx=109, majf=0, minf=9 01:11:24.143 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646805: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.143 slat (nsec): min=9609, max=43931, avg=20399.82, stdev=5364.56 01:11:24.143 clat (usec): min=13442, max=55806, avg=37021.26, stdev=1833.11 01:11:24.143 lat (usec): min=13477, max=55823, avg=37041.66, stdev=1832.94 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[32375], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 01:11:24.143 | 99.99th=[55837] 01:11:24.143 bw ( KiB/s): min= 1664, max= 1792, per=4.20%, avg=1717.89, stdev=64.93, samples=19 01:11:24.143 iops : min= 416, max= 448, avg=429.47, stdev=16.23, samples=19 01:11:24.143 lat (msec) : 20=0.42%, 50=99.54%, 100=0.05% 01:11:24.143 cpu : usr=97.22%, sys=2.41%, ctx=15, majf=0, minf=9 01:11:24.143 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646806: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10014msec) 01:11:24.143 slat (nsec): min=9469, max=42465, avg=14082.77, stdev=3585.98 01:11:24.143 clat (usec): min=26416, max=51347, avg=37243.49, stdev=1140.33 01:11:24.143 lat (usec): min=26431, max=51375, avg=37257.57, stdev=1139.69 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[36963], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[38011], 99.50th=[39060], 99.90th=[51119], 99.95th=[51119], 01:11:24.143 | 99.99th=[51119] 01:11:24.143 bw ( KiB/s): min= 1644, max= 1792, per=4.17%, avg=1707.60, stdev=63.70, samples=20 01:11:24.143 iops : min= 411, max= 448, avg=426.90, stdev=15.92, samples=20 01:11:24.143 lat (msec) : 50=99.63%, 100=0.37% 01:11:24.143 cpu : usr=97.35%, sys=2.29%, ctx=8, majf=0, minf=9 01:11:24.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646807: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=428, BW=1715KiB/s 
(1756kB/s)(16.9MiB/10076msec) 01:11:24.143 slat (nsec): min=3276, max=51409, avg=14137.14, stdev=4764.44 01:11:24.143 clat (usec): min=14068, max=78519, avg=37007.50, stdev=2940.03 01:11:24.143 lat (usec): min=14075, max=78530, avg=37021.64, stdev=2940.11 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[19792], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38011], 99.90th=[60556], 99.95th=[60556], 01:11:24.143 | 99.99th=[78119] 01:11:24.143 bw ( KiB/s): min= 1664, max= 1792, per=4.21%, avg=1721.60, stdev=65.33, samples=20 01:11:24.143 iops : min= 416, max= 448, avg=430.40, stdev=16.33, samples=20 01:11:24.143 lat (msec) : 20=1.06%, 50=98.47%, 100=0.46% 01:11:24.143 cpu : usr=97.51%, sys=2.10%, ctx=23, majf=0, minf=9 01:11:24.143 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646808: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=428, BW=1716KiB/s (1757kB/s)(16.8MiB/10035msec) 01:11:24.143 slat (nsec): min=3342, max=36673, avg=13025.70, stdev=4243.14 01:11:24.143 clat (usec): min=11733, max=64061, avg=37192.18, stdev=2318.13 01:11:24.143 lat (usec): min=11740, max=64073, avg=37205.20, stdev=2317.94 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 
90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38536], 99.90th=[62129], 99.95th=[62129], 01:11:24.143 | 99.99th=[64226] 01:11:24.143 bw ( KiB/s): min= 1664, max= 1792, per=4.19%, avg=1715.35, stdev=64.21, samples=20 01:11:24.143 iops : min= 416, max= 448, avg=428.80, stdev=16.08, samples=20 01:11:24.143 lat (msec) : 20=0.37%, 50=99.26%, 100=0.37% 01:11:24.143 cpu : usr=97.61%, sys=1.93%, ctx=16, majf=0, minf=9 01:11:24.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646809: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10013msec) 01:11:24.143 slat (nsec): min=4376, max=87282, avg=18830.71, stdev=8807.65 01:11:24.143 clat (usec): min=18184, max=64005, avg=37193.01, stdev=2105.37 01:11:24.143 lat (usec): min=18208, max=64018, avg=37211.84, stdev=2104.51 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38536], 99.90th=[64226], 99.95th=[64226], 01:11:24.143 | 99.99th=[64226] 01:11:24.143 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.143 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.143 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.143 cpu : usr=97.65%, sys=2.00%, ctx=15, majf=0, minf=9 01:11:24.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 01:11:24.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.143 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.143 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.143 filename1: (groupid=0, jobs=1): err= 0: pid=2646810: Mon Dec 9 11:22:24 2024 01:11:24.143 read: IOPS=428, BW=1715KiB/s (1756kB/s)(16.8MiB/10001msec) 01:11:24.143 slat (nsec): min=4354, max=82237, avg=18973.81, stdev=6285.88 01:11:24.143 clat (usec): min=26007, max=38516, avg=37142.19, stdev=706.91 01:11:24.143 lat (usec): min=26014, max=38535, avg=37161.17, stdev=707.35 01:11:24.143 clat percentiles (usec): 01:11:24.143 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.143 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.143 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.143 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 01:11:24.144 | 99.99th=[38536] 01:11:24.144 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1711.16, stdev=63.44, samples=19 01:11:24.144 iops : min= 416, max= 448, avg=427.79, stdev=15.86, samples=19 01:11:24.144 lat (msec) : 50=100.00% 01:11:24.144 cpu : usr=97.68%, sys=1.98%, ctx=41, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646811: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10005msec) 01:11:24.144 slat (nsec): min=10320, 
max=77154, avg=19917.33, stdev=8371.22 01:11:24.144 clat (usec): min=18632, max=63329, avg=37157.24, stdev=1806.45 01:11:24.144 lat (usec): min=18648, max=63376, avg=37177.16, stdev=1807.46 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 99.50th=[38536], 99.90th=[56361], 99.95th=[56886], 01:11:24.144 | 99.99th=[63177] 01:11:24.144 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.144 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.144 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.144 cpu : usr=97.33%, sys=2.31%, ctx=15, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646812: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10011msec) 01:11:24.144 slat (nsec): min=14899, max=80137, avg=37815.20, stdev=8995.46 01:11:24.144 clat (usec): min=18109, max=62304, avg=37012.43, stdev=2036.34 01:11:24.144 lat (usec): min=18128, max=62342, avg=37050.24, stdev=2036.50 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 
99.50th=[38536], 99.90th=[62129], 99.95th=[62129], 01:11:24.144 | 99.99th=[62129] 01:11:24.144 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.144 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.144 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.144 cpu : usr=97.27%, sys=2.28%, ctx=12, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646813: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10006msec) 01:11:24.144 slat (usec): min=9, max=104, avg=23.30, stdev=17.36 01:11:24.144 clat (usec): min=18695, max=57639, avg=37128.18, stdev=1804.36 01:11:24.144 lat (usec): min=18715, max=57672, avg=37151.49, stdev=1804.64 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 99.50th=[38536], 99.90th=[57410], 99.95th=[57410], 01:11:24.144 | 99.99th=[57410] 01:11:24.144 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1704.58, stdev=74.17, samples=19 01:11:24.144 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.144 lat (msec) : 20=0.37%, 50=99.25%, 100=0.37% 01:11:24.144 cpu : usr=97.12%, sys=2.51%, ctx=14, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646814: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.144 slat (nsec): min=11033, max=94945, avg=64057.83, stdev=8071.40 01:11:24.144 clat (usec): min=13903, max=45201, avg=36634.27, stdev=1750.22 01:11:24.144 lat (usec): min=13916, max=45264, avg=36698.33, stdev=1752.82 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[32637], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 01:11:24.144 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38536], 99.95th=[41157], 01:11:24.144 | 99.99th=[45351] 01:11:24.144 bw ( KiB/s): min= 1664, max= 1792, per=4.19%, avg=1715.20, stdev=64.34, samples=20 01:11:24.144 iops : min= 416, max= 448, avg=428.80, stdev=16.08, samples=20 01:11:24.144 lat (msec) : 20=0.37%, 50=99.63% 01:11:24.144 cpu : usr=97.05%, sys=2.49%, ctx=14, majf=0, minf=9 01:11:24.144 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646816: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10008msec) 01:11:24.144 slat (nsec): min=17988, max=78581, avg=37640.61, stdev=8788.14 01:11:24.144 clat (usec): 
min=26032, max=45150, avg=37014.02, stdev=898.90 01:11:24.144 lat (usec): min=26076, max=45179, avg=37051.66, stdev=898.15 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 99.50th=[38536], 99.90th=[44827], 99.95th=[45351], 01:11:24.144 | 99.99th=[45351] 01:11:24.144 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1708.80, stdev=62.64, samples=20 01:11:24.144 iops : min= 416, max= 448, avg=427.20, stdev=15.66, samples=20 01:11:24.144 lat (msec) : 50=100.00% 01:11:24.144 cpu : usr=96.77%, sys=2.78%, ctx=15, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646817: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=426, BW=1708KiB/s (1749kB/s)(16.7MiB/10005msec) 01:11:24.144 slat (usec): min=22, max=113, avg=43.22, stdev=14.84 01:11:24.144 clat (usec): min=18165, max=81749, avg=37110.20, stdev=3198.73 01:11:24.144 lat (usec): min=18201, max=81803, avg=37153.42, stdev=3199.14 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[38536], 99.50th=[51119], 99.90th=[81265], 99.95th=[81265], 01:11:24.144 | 99.99th=[81265] 01:11:24.144 bw ( 
KiB/s): min= 1520, max= 1792, per=4.15%, avg=1697.68, stdev=73.69, samples=19 01:11:24.144 iops : min= 380, max= 448, avg=424.42, stdev=18.42, samples=19 01:11:24.144 lat (msec) : 20=0.37%, 50=98.88%, 100=0.75% 01:11:24.144 cpu : usr=96.97%, sys=2.57%, ctx=14, majf=0, minf=9 01:11:24.144 IO depths : 1=5.5%, 2=11.8%, 4=24.9%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646818: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=427, BW=1708KiB/s (1749kB/s)(16.7MiB/10004msec) 01:11:24.144 slat (usec): min=27, max=118, avg=65.93, stdev= 8.27 01:11:24.144 clat (usec): min=19959, max=80963, avg=36877.27, stdev=2891.45 01:11:24.144 lat (usec): min=20019, max=81014, avg=36943.20, stdev=2890.12 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 01:11:24.144 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[38011], 99.50th=[38536], 99.90th=[81265], 99.95th=[81265], 01:11:24.144 | 99.99th=[81265] 01:11:24.144 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1704.42, stdev=74.55, samples=19 01:11:24.144 iops : min= 384, max= 448, avg=426.11, stdev=18.64, samples=19 01:11:24.144 lat (msec) : 20=0.02%, 50=99.60%, 100=0.37% 01:11:24.144 cpu : usr=96.88%, sys=2.66%, ctx=13, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 filename2: (groupid=0, jobs=1): err= 0: pid=2646819: Mon Dec 9 11:22:24 2024 01:11:24.144 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10006msec) 01:11:24.144 slat (nsec): min=12722, max=65849, avg=23725.11, stdev=6986.40 01:11:24.144 clat (usec): min=13385, max=40218, avg=36961.70, stdev=1708.45 01:11:24.144 lat (usec): min=13404, max=40258, avg=36985.43, stdev=1707.97 01:11:24.144 clat percentiles (usec): 01:11:24.144 | 1.00th=[32637], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 01:11:24.144 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 01:11:24.144 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 01:11:24.144 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 01:11:24.144 | 99.99th=[40109] 01:11:24.144 bw ( KiB/s): min= 1664, max= 1792, per=4.19%, avg=1715.20, stdev=64.34, samples=20 01:11:24.144 iops : min= 416, max= 448, avg=428.80, stdev=16.08, samples=20 01:11:24.144 lat (msec) : 20=0.33%, 50=99.67% 01:11:24.144 cpu : usr=96.84%, sys=2.71%, ctx=14, majf=0, minf=9 01:11:24.144 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:11:24.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:24.144 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:24.144 latency : target=0, window=0, percentile=100.00%, depth=16 01:11:24.144 01:11:24.144 Run status group 0 (all jobs): 01:11:24.144 READ: bw=39.9MiB/s (41.9MB/s), 1708KiB/s-1721KiB/s (1749kB/s-1762kB/s), io=402MiB (422MB), run=10001-10076msec 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # 
local sub 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:11:24.144 11:22:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 bdev_null0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:11:24.144 11:22:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.144 [2024-12-09 11:22:24.609424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:11:24.144 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.145 bdev_null1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.145 
11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:24.145 { 01:11:24.145 "params": { 01:11:24.145 "name": "Nvme$subsystem", 01:11:24.145 "trtype": "$TEST_TRANSPORT", 01:11:24.145 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:24.145 "adrfam": "ipv4", 01:11:24.145 "trsvcid": "$NVMF_PORT", 01:11:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:24.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:24.145 "hdgst": ${hdgst:-false}, 01:11:24.145 "ddgst": ${ddgst:-false} 01:11:24.145 }, 01:11:24.145 "method": "bdev_nvme_attach_controller" 01:11:24.145 } 01:11:24.145 EOF 01:11:24.145 )") 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:24.145 { 01:11:24.145 "params": { 01:11:24.145 "name": "Nvme$subsystem", 01:11:24.145 "trtype": "$TEST_TRANSPORT", 01:11:24.145 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:24.145 "adrfam": "ipv4", 01:11:24.145 "trsvcid": "$NVMF_PORT", 01:11:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:24.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:24.145 "hdgst": ${hdgst:-false}, 01:11:24.145 "ddgst": ${ddgst:-false} 01:11:24.145 }, 01:11:24.145 "method": "bdev_nvme_attach_controller" 01:11:24.145 } 01:11:24.145 EOF 01:11:24.145 )") 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:24.145 "params": { 01:11:24.145 "name": "Nvme0", 01:11:24.145 "trtype": "tcp", 01:11:24.145 "traddr": "10.0.0.2", 01:11:24.145 "adrfam": "ipv4", 01:11:24.145 "trsvcid": "4420", 01:11:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:24.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:24.145 "hdgst": false, 01:11:24.145 "ddgst": false 01:11:24.145 }, 01:11:24.145 "method": "bdev_nvme_attach_controller" 01:11:24.145 },{ 01:11:24.145 "params": { 01:11:24.145 "name": "Nvme1", 01:11:24.145 "trtype": "tcp", 01:11:24.145 "traddr": "10.0.0.2", 01:11:24.145 "adrfam": "ipv4", 01:11:24.145 "trsvcid": "4420", 01:11:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:24.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:24.145 "hdgst": false, 01:11:24.145 "ddgst": false 01:11:24.145 }, 01:11:24.145 "method": "bdev_nvme_attach_controller" 01:11:24.145 }' 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:24.145 11:22:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:11:24.145 11:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:24.145 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:11:24.145 ... 01:11:24.145 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:11:24.145 ... 01:11:24.145 fio-3.35 01:11:24.145 Starting 4 threads 01:11:30.699 01:11:30.699 filename0: (groupid=0, jobs=1): err= 0: pid=2648454: Mon Dec 9 11:22:30 2024 01:11:30.699 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5004msec) 01:11:30.699 slat (nsec): min=8895, max=42093, avg=12517.92, stdev=3876.18 01:11:30.699 clat (usec): min=1097, max=7925, avg=4180.98, stdev=633.98 01:11:30.699 lat (usec): min=1110, max=7943, avg=4193.50, stdev=634.20 01:11:30.699 clat percentiles (usec): 01:11:30.699 | 1.00th=[ 2311], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3720], 01:11:30.699 | 30.00th=[ 3916], 40.00th=[ 4080], 50.00th=[ 4293], 60.00th=[ 4424], 01:11:30.699 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5080], 01:11:30.699 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 7242], 99.95th=[ 7373], 01:11:30.699 | 99.99th=[ 7898] 01:11:30.699 bw ( KiB/s): min=14288, max=15904, per=26.74%, avg=15164.80, stdev=459.40, samples=10 01:11:30.699 iops : min= 1786, max= 1988, avg=1895.60, stdev=57.42, samples=10 01:11:30.700 lat (msec) : 2=0.67%, 4=33.34%, 10=65.98% 01:11:30.700 cpu : usr=93.10%, sys=6.50%, ctx=6, majf=0, minf=9 01:11:30.700 IO depths : 1=0.3%, 2=10.9%, 4=60.2%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:30.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 issued rwts: total=9486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:30.700 latency : target=0, window=0, percentile=100.00%, depth=8 01:11:30.700 filename0: (groupid=0, jobs=1): err= 0: pid=2648455: Mon Dec 9 11:22:30 2024 01:11:30.700 read: IOPS=1749, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5003msec) 01:11:30.700 slat (nsec): min=7081, max=50453, avg=13919.92, stdev=4566.88 01:11:30.700 clat (usec): min=822, max=8734, avg=4529.14, stdev=761.68 01:11:30.700 lat (usec): min=836, max=8743, avg=4543.06, stdev=761.49 01:11:30.700 clat percentiles (usec): 01:11:30.700 | 1.00th=[ 2835], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4047], 01:11:30.700 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 01:11:30.700 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 6063], 01:11:30.700 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 8160], 99.95th=[ 8225], 01:11:30.700 | 99.99th=[ 8717] 01:11:30.700 bw ( KiB/s): min=13328, max=14640, per=24.67%, avg=13993.60, stdev=398.34, samples=10 01:11:30.700 iops : min= 1666, max= 1830, avg=1749.20, stdev=49.79, samples=10 01:11:30.700 lat (usec) : 1000=0.05% 01:11:30.700 lat (msec) : 2=0.21%, 4=16.90%, 10=82.85% 01:11:30.700 cpu : usr=93.64%, sys=5.98%, ctx=8, majf=0, minf=9 01:11:30.700 IO depths : 1=0.1%, 2=10.3%, 4=61.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:30.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 issued rwts: total=8751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:30.700 latency : target=0, window=0, percentile=100.00%, depth=8 01:11:30.700 filename1: (groupid=0, jobs=1): err= 0: pid=2648456: Mon Dec 9 11:22:30 2024 01:11:30.700 read: IOPS=1749, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5002msec) 01:11:30.700 slat (nsec): min=7117, max=42103, avg=13486.46, stdev=4413.09 01:11:30.700 clat (usec): min=873, max=8171, avg=4528.76, stdev=664.65 
01:11:30.700 lat (usec): min=889, max=8180, avg=4542.24, stdev=664.43 01:11:30.700 clat percentiles (usec): 01:11:30.700 | 1.00th=[ 3032], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4113], 01:11:30.700 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 01:11:30.700 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5342], 95.00th=[ 5735], 01:11:30.700 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 7963], 99.95th=[ 8029], 01:11:30.700 | 99.99th=[ 8160] 01:11:30.700 bw ( KiB/s): min=13456, max=14560, per=24.66%, avg=13985.78, stdev=334.13, samples=9 01:11:30.700 iops : min= 1682, max= 1820, avg=1748.22, stdev=41.77, samples=9 01:11:30.700 lat (usec) : 1000=0.02% 01:11:30.700 lat (msec) : 2=0.08%, 4=15.18%, 10=84.72% 01:11:30.700 cpu : usr=93.24%, sys=6.36%, ctx=10, majf=0, minf=9 01:11:30.700 IO depths : 1=0.5%, 2=10.1%, 4=62.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:30.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 issued rwts: total=8751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:30.700 latency : target=0, window=0, percentile=100.00%, depth=8 01:11:30.700 filename1: (groupid=0, jobs=1): err= 0: pid=2648457: Mon Dec 9 11:22:30 2024 01:11:30.700 read: IOPS=1697, BW=13.3MiB/s (13.9MB/s)(66.3MiB/5002msec) 01:11:30.700 slat (nsec): min=7663, max=41726, avg=13411.86, stdev=4460.43 01:11:30.700 clat (usec): min=839, max=10266, avg=4670.67, stdev=777.25 01:11:30.700 lat (usec): min=850, max=10287, avg=4684.08, stdev=776.83 01:11:30.700 clat percentiles (usec): 01:11:30.700 | 1.00th=[ 2868], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4359], 01:11:30.700 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 01:11:30.700 | 70.00th=[ 4752], 80.00th=[ 5080], 90.00th=[ 5538], 95.00th=[ 6194], 01:11:30.700 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8225], 99.95th=[ 8225], 01:11:30.700 | 99.99th=[10290] 01:11:30.700 bw 
( KiB/s): min=13280, max=14124, per=23.93%, avg=13574.00, stdev=284.28, samples=10 01:11:30.700 iops : min= 1660, max= 1765, avg=1696.70, stdev=35.43, samples=10 01:11:30.700 lat (usec) : 1000=0.12% 01:11:30.700 lat (msec) : 2=0.32%, 4=9.55%, 10=90.00%, 20=0.01% 01:11:30.700 cpu : usr=93.14%, sys=6.48%, ctx=7, majf=0, minf=9 01:11:30.700 IO depths : 1=0.1%, 2=8.7%, 4=63.6%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:30.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:30.700 issued rwts: total=8490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:30.700 latency : target=0, window=0, percentile=100.00%, depth=8 01:11:30.700 01:11:30.700 Run status group 0 (all jobs): 01:11:30.700 READ: bw=55.4MiB/s (58.1MB/s), 13.3MiB/s-14.8MiB/s (13.9MB/s-15.5MB/s), io=277MiB (291MB), run=5002-5004msec 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.700 01:11:30.700 real 0m25.118s 01:11:30.700 user 4m39.337s 01:11:30.700 sys 0m9.106s 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 ************************************ 01:11:30.700 END TEST fio_dif_rand_params 01:11:30.700 ************************************ 01:11:30.700 11:22:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:11:30.700 11:22:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
01:11:30.700 11:22:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:11:30.700 ************************************ 01:11:30.700 START TEST fio_dif_digest 01:11:30.700 ************************************ 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.700 11:22:31 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 01:11:30.700 bdev_null0 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:30.701 [2024-12-09 11:22:31.326861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:11:30.701 
11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:30.701 { 01:11:30.701 "params": { 01:11:30.701 "name": "Nvme$subsystem", 01:11:30.701 "trtype": "$TEST_TRANSPORT", 01:11:30.701 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:30.701 "adrfam": "ipv4", 01:11:30.701 "trsvcid": "$NVMF_PORT", 01:11:30.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:30.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:30.701 "hdgst": ${hdgst:-false}, 01:11:30.701 "ddgst": ${ddgst:-false} 01:11:30.701 }, 01:11:30.701 "method": "bdev_nvme_attach_controller" 01:11:30.701 } 01:11:30.701 EOF 01:11:30.701 )") 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:30.701 "params": { 01:11:30.701 "name": "Nvme0", 01:11:30.701 "trtype": "tcp", 01:11:30.701 "traddr": "10.0.0.2", 01:11:30.701 "adrfam": "ipv4", 01:11:30.701 "trsvcid": "4420", 01:11:30.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:11:30.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:11:30.701 "hdgst": true, 01:11:30.701 "ddgst": true 01:11:30.701 }, 01:11:30.701 "method": "bdev_nvme_attach_controller" 01:11:30.701 }' 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:11:30.701 11:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:11:30.701 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:11:30.701 ... 01:11:30.701 fio-3.35 01:11:30.701 Starting 3 threads 01:11:42.891 01:11:42.892 filename0: (groupid=0, jobs=1): err= 0: pid=2649313: Mon Dec 9 11:22:42 2024 01:11:42.892 read: IOPS=193, BW=24.2MiB/s (25.3MB/s)(243MiB/10048msec) 01:11:42.892 slat (nsec): min=2951, max=39073, avg=14744.08, stdev=2290.46 01:11:42.892 clat (usec): min=11949, max=54688, avg=15475.38, stdev=1628.81 01:11:42.892 lat (usec): min=11962, max=54704, avg=15490.13, stdev=1628.75 01:11:42.892 clat percentiles (usec): 01:11:42.892 | 1.00th=[12911], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 01:11:42.892 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 01:11:42.892 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 01:11:42.892 | 99.00th=[18220], 99.50th=[18482], 99.90th=[50594], 99.95th=[54789], 01:11:42.892 | 99.99th=[54789] 01:11:42.892 bw ( KiB/s): min=23808, max=25856, per=35.02%, avg=24832.00, stdev=575.44, samples=20 01:11:42.892 iops : min= 186, max= 202, avg=194.00, stdev= 4.50, samples=20 01:11:42.892 lat 
(msec) : 20=99.74%, 50=0.15%, 100=0.10% 01:11:42.892 cpu : usr=91.84%, sys=7.82%, ctx=18, majf=0, minf=33 01:11:42.892 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:42.892 latency : target=0, window=0, percentile=100.00%, depth=3 01:11:42.892 filename0: (groupid=0, jobs=1): err= 0: pid=2649314: Mon Dec 9 11:22:42 2024 01:11:42.892 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10047msec) 01:11:42.892 slat (nsec): min=6278, max=36859, avg=12225.65, stdev=1955.51 01:11:42.892 clat (usec): min=12241, max=53646, avg=16209.89, stdev=1575.13 01:11:42.892 lat (usec): min=12249, max=53659, avg=16222.12, stdev=1575.22 01:11:42.892 clat percentiles (usec): 01:11:42.892 | 1.00th=[13698], 5.00th=[14484], 10.00th=[14877], 20.00th=[15270], 01:11:42.892 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 01:11:42.892 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 01:11:42.892 | 99.00th=[19006], 99.50th=[19268], 99.90th=[46924], 99.95th=[53740], 01:11:42.892 | 99.99th=[53740] 01:11:42.892 bw ( KiB/s): min=22784, max=24576, per=33.43%, avg=23705.60, stdev=487.14, samples=20 01:11:42.892 iops : min= 178, max= 192, avg=185.20, stdev= 3.81, samples=20 01:11:42.892 lat (msec) : 20=99.73%, 50=0.22%, 100=0.05% 01:11:42.892 cpu : usr=93.07%, sys=6.61%, ctx=18, majf=0, minf=59 01:11:42.892 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:42.892 latency : target=0, window=0, percentile=100.00%, depth=3 
01:11:42.892 filename0: (groupid=0, jobs=1): err= 0: pid=2649315: Mon Dec 9 11:22:42 2024 01:11:42.892 read: IOPS=175, BW=22.0MiB/s (23.1MB/s)(221MiB/10047msec) 01:11:42.892 slat (nsec): min=6471, max=20203, avg=12421.24, stdev=1556.78 01:11:42.892 clat (usec): min=13227, max=57606, avg=17008.89, stdev=1689.68 01:11:42.892 lat (usec): min=13239, max=57619, avg=17021.31, stdev=1689.66 01:11:42.892 clat percentiles (usec): 01:11:42.892 | 1.00th=[14615], 5.00th=[15270], 10.00th=[15664], 20.00th=[16057], 01:11:42.892 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 01:11:42.892 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 01:11:42.892 | 99.00th=[19792], 99.50th=[20317], 99.90th=[51643], 99.95th=[57410], 01:11:42.892 | 99.99th=[57410] 01:11:42.892 bw ( KiB/s): min=20992, max=23552, per=31.86%, avg=22592.00, stdev=699.24, samples=20 01:11:42.892 iops : min= 164, max= 184, avg=176.50, stdev= 5.46, samples=20 01:11:42.892 lat (msec) : 20=99.38%, 50=0.51%, 100=0.11% 01:11:42.892 cpu : usr=92.83%, sys=6.85%, ctx=27, majf=0, minf=40 01:11:42.892 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:42.892 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:42.892 latency : target=0, window=0, percentile=100.00%, depth=3 01:11:42.892 01:11:42.892 Run status group 0 (all jobs): 01:11:42.892 READ: bw=69.2MiB/s (72.6MB/s), 22.0MiB/s-24.2MiB/s (23.1MB/s-25.3MB/s), io=696MiB (730MB), run=10047-10048msec 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # 
destroy_subsystem 0 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:42.892 01:11:42.892 real 0m11.296s 01:11:42.892 user 0m31.528s 01:11:42.892 sys 0m2.474s 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:42.892 11:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:11:42.892 ************************************ 01:11:42.892 END TEST fio_dif_digest 01:11:42.892 ************************************ 01:11:42.892 11:22:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:11:42.892 11:22:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:42.892 rmmod nvme_tcp 01:11:42.892 rmmod nvme_fabrics 01:11:42.892 rmmod 
nvme_keyring 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2642456 ']' 01:11:42.892 11:22:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2642456 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2642456 ']' 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2642456 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642456 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642456' 01:11:42.892 killing process with pid 2642456 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2642456 01:11:42.892 11:22:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2642456 01:11:42.892 11:22:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:11:42.892 11:22:43 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:11:45.419 Waiting for block devices as requested 01:11:45.419 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 01:11:45.419 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:11:45.419 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:11:45.419 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:11:45.419 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:11:45.419 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:11:45.676 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 01:11:45.676 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:11:45.934 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:11:45.934 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:11:45.934 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:11:46.193 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:11:46.193 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:11:46.193 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:11:46.451 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:11:46.451 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:11:46.451 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:46.708 11:22:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 01:11:46.709 11:22:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:46.709 11:22:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:11:46.709 11:22:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:48.608 11:22:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:11:48.608 01:11:48.608 real 1m16.922s 01:11:48.608 user 6m47.564s 01:11:48.608 sys 0m28.287s 01:11:48.608 11:22:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:48.608 11:22:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:11:48.608 ************************************ 01:11:48.608 END TEST nvmf_dif 01:11:48.608 ************************************ 01:11:48.866 
11:22:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:11:48.866 11:22:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:11:48.866 11:22:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:48.866 11:22:49 -- common/autotest_common.sh@10 -- # set +x 01:11:48.866 ************************************ 01:11:48.866 START TEST nvmf_abort_qd_sizes 01:11:48.866 ************************************ 01:11:48.866 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:11:48.866 * Looking for test storage... 01:11:48.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:11:48.866 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:48.866 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 01:11:48.866 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:11:49.124 
11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:11:49.124 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:49.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:49.125 --rc 
genhtml_branch_coverage=1 01:11:49.125 --rc genhtml_function_coverage=1 01:11:49.125 --rc genhtml_legend=1 01:11:49.125 --rc geninfo_all_blocks=1 01:11:49.125 --rc geninfo_unexecuted_blocks=1 01:11:49.125 01:11:49.125 ' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:49.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:49.125 --rc genhtml_branch_coverage=1 01:11:49.125 --rc genhtml_function_coverage=1 01:11:49.125 --rc genhtml_legend=1 01:11:49.125 --rc geninfo_all_blocks=1 01:11:49.125 --rc geninfo_unexecuted_blocks=1 01:11:49.125 01:11:49.125 ' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:49.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:49.125 --rc genhtml_branch_coverage=1 01:11:49.125 --rc genhtml_function_coverage=1 01:11:49.125 --rc genhtml_legend=1 01:11:49.125 --rc geninfo_all_blocks=1 01:11:49.125 --rc geninfo_unexecuted_blocks=1 01:11:49.125 01:11:49.125 ' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:49.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:49.125 --rc genhtml_branch_coverage=1 01:11:49.125 --rc genhtml_function_coverage=1 01:11:49.125 --rc genhtml_legend=1 01:11:49.125 --rc geninfo_all_blocks=1 01:11:49.125 --rc geninfo_unexecuted_blocks=1 01:11:49.125 01:11:49.125 ' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:11:49.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 01:11:49.125 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 01:11:55.681 Found 0000:af:00.0 (0x8086 - 0x159b) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 01:11:55.681 Found 0000:af:00.1 (0x8086 - 0x159b) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 01:11:55.681 Found net devices under 0000:af:00.0: cvl_0_0 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 01:11:55.681 Found net devices under 0000:af:00.1: cvl_0_1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:11:55.681 11:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:11:55.681 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:11:55.681 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:11:55.681 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:11:55.681 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:11:55.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:11:55.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 01:11:55.681 01:11:55.681 --- 10.0.0.2 ping statistics --- 01:11:55.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:55.682 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 01:11:55.682 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:11:55.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:55.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 01:11:55.682 01:11:55.682 --- 10.0.0.1 ping statistics --- 01:11:55.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:55.682 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 01:11:55.682 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:55.682 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 01:11:55.682 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:11:55.682 11:22:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:11:58.207 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 01:11:58.207 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 01:11:58.207 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 01:12:01.491 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2656331 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2656331 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2656331 ']' 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:01.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
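The `nvmf_tcp_init` sequence traced earlier in this run moves the target NIC into a private network namespace, addresses both ends, opens the NVMe/TCP port in iptables, and pings in both directions. A condensed sketch of those steps (interface and namespace names `cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk` are the ones printed in the log; running this for real requires root and those NICs, so the function is only defined here, not called):

```shell
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps from the trace above.
# Privileged and hardware-specific: shown for illustration only.
setup_nvmf_tcp_ns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt" netns "$ns"           # target NIC into its own namespace
    ip addr add 10.0.0.1/24 dev "$ini"       # initiator side, default namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP listener port and verify both directions.
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```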
01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:01.491 11:23:02 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:12:01.491 [2024-12-09 11:23:02.341781] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:12:01.491 [2024-12-09 11:23:02.341864] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:01.491 [2024-12-09 11:23:02.474498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:12:01.491 [2024-12-09 11:23:02.529472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:12:01.491 [2024-12-09 11:23:02.529524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:01.491 [2024-12-09 11:23:02.529539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:01.491 [2024-12-09 11:23:02.529553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:01.491 [2024-12-09 11:23:02.529564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:12:01.491 [2024-12-09 11:23:02.531536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:01.491 [2024-12-09 11:23:02.531625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:12:01.491 [2024-12-09 11:23:02.531722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:12:01.491 [2024-12-09 11:23:02.531726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:12:02.057 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
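The `nvme_in_userspace` helper traced here collects NVMe controllers by their PCI class code (`0x010802`) and reports `0000:5e:00.0`, the device then handed to `bdev_nvme_attach_controller`. The same enumeration can be sketched directly against sysfs (illustrative `list_nvme_bdfs` helper, not SPDK's `pci_bus_cache` logic; it takes an alternate sysfs root so it can be exercised without real hardware):

```shell
#!/usr/bin/env bash
# List PCI functions whose class code marks them as NVMe controllers
# (0x010802), mirroring what nvme_in_userspace derives from SPDK's
# pci_bus_cache. Illustrative sketch only.
list_nvme_bdfs() {
    local base=${1:-/sys/bus/pci/devices} dev class
    for dev in "$base"/*; do
        [[ -r $dev/class ]] || continue
        read -r class < "$dev/class"
        [[ $class == 0x010802 ]] && printf '%s\n' "${dev##*/}"
    done
}
```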
01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:02.315 11:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:02.315 ************************************ 01:12:02.315 START TEST spdk_target_abort 01:12:02.315 ************************************ 01:12:02.315 11:23:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:12:02.315 11:23:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:12:02.315 11:23:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 01:12:02.315 11:23:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:02.315 11:23:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:05.605 spdk_targetn1 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:05.605 [2024-12-09 11:23:06.133954] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:05.605 [2024-12-09 11:23:06.186268] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:12:05.605 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:05.606 11:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:08.897 Initializing NVMe Controllers 01:12:08.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:12:08.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:08.897 Initialization complete. Launching workers. 
01:12:08.897 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14071, failed: 0 01:12:08.897 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1309, failed to submit 12762 01:12:08.897 success 710, unsuccessful 599, failed 0 01:12:08.897 11:23:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:08.897 11:23:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:12.191 Initializing NVMe Controllers 01:12:12.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:12:12.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:12.191 Initialization complete. Launching workers. 01:12:12.191 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8664, failed: 0 01:12:12.191 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7411 01:12:12.191 success 325, unsuccessful 928, failed 0 01:12:12.191 11:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:12.191 11:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:15.490 Initializing NVMe Controllers 01:12:15.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:12:15.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:15.490 Initialization complete. Launching workers. 
01:12:15.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36115, failed: 0 01:12:15.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2657, failed to submit 33458 01:12:15.490 success 576, unsuccessful 2081, failed 0 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:15.490 11:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2656331 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2656331 ']' 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2656331 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:18.781 11:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656331 01:12:19.040 11:23:20 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:12:19.040 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:12:19.040 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656331' 01:12:19.041 killing process with pid 2656331 01:12:19.041 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2656331 01:12:19.041 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2656331 01:12:19.300 01:12:19.300 real 0m16.988s 01:12:19.300 user 1m7.215s 01:12:19.300 sys 0m2.766s 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:19.300 ************************************ 01:12:19.300 END TEST spdk_target_abort 01:12:19.300 ************************************ 01:12:19.300 11:23:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:12:19.300 11:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:12:19.300 11:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:19.300 11:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:19.300 ************************************ 01:12:19.300 START TEST kernel_target_abort 01:12:19.300 ************************************ 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:12:19.300 11:23:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:12:19.300 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:12:19.301 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 01:12:19.301 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:12:19.301 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:12:19.301 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:12:19.301 11:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:12:21.901 Waiting for block devices as requested 01:12:21.901 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 01:12:21.901 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:12:21.901 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:12:22.245 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:12:22.245 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:12:22.245 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:12:22.245 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:12:22.600 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:12:22.600 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:12:22.600 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:12:22.600 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:12:22.859 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:12:22.859 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:12:22.859 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:12:23.118 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:12:23.118 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:12:23.118 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:12:23.378 11:23:24 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 01:12:23.378 No valid GPT data, bailing 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:12:23.378 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 10.0.0.1 -t tcp -s 4420 01:12:23.379 01:12:23.379 Discovery Log Number of Records 2, Generation counter 2 01:12:23.379 =====Discovery Log Entry 0====== 01:12:23.379 trtype: tcp 01:12:23.379 adrfam: ipv4 01:12:23.379 subtype: current discovery subsystem 01:12:23.379 treq: not specified, sq flow control disable supported 01:12:23.379 portid: 1 01:12:23.379 trsvcid: 4420 01:12:23.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:12:23.379 traddr: 10.0.0.1 01:12:23.379 eflags: none 01:12:23.379 sectype: none 01:12:23.379 =====Discovery Log Entry 1====== 01:12:23.379 trtype: tcp 01:12:23.379 adrfam: ipv4 01:12:23.379 subtype: nvme subsystem 01:12:23.379 treq: not specified, sq flow control disable supported 01:12:23.379 portid: 1 01:12:23.379 trsvcid: 4420 01:12:23.379 subnqn: nqn.2016-06.io.spdk:testnqn 01:12:23.379 traddr: 10.0.0.1 01:12:23.379 eflags: none 01:12:23.379 sectype: none 01:12:23.379 11:23:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:23.379 11:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:26.671 Initializing NVMe Controllers 01:12:26.671 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:12:26.671 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:26.671 Initialization complete. Launching workers. 
01:12:26.671 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49338, failed: 0 01:12:26.671 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 49338, failed to submit 0 01:12:26.671 success 0, unsuccessful 49338, failed 0 01:12:26.671 11:23:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:26.671 11:23:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:29.965 Initializing NVMe Controllers 01:12:29.965 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:12:29.965 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:29.965 Initialization complete. Launching workers. 01:12:29.965 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86067, failed: 0 01:12:29.965 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19578, failed to submit 66489 01:12:29.965 success 0, unsuccessful 19578, failed 0 01:12:29.965 11:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:12:29.965 11:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:12:33.262 Initializing NVMe Controllers 01:12:33.262 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:12:33.262 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:12:33.262 Initialization complete. Launching workers. 
01:12:33.262 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80472, failed: 0 01:12:33.262 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20086, failed to submit 60386 01:12:33.262 success 0, unsuccessful 20086, failed 0 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:12:33.262 11:23:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:12:35.801 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 01:12:35.801 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 01:12:36.061 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 01:12:36.061 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 01:12:36.061 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 01:12:36.061 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 01:12:36.061 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 01:12:39.356 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 01:12:39.356 01:12:39.356 real 0m19.937s 01:12:39.356 user 0m8.422s 01:12:39.356 sys 0m5.121s 01:12:39.356 11:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:39.356 11:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:12:39.356 ************************************ 01:12:39.356 END TEST kernel_target_abort 01:12:39.356 ************************************ 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:39.356 rmmod nvme_tcp 01:12:39.356 rmmod nvme_fabrics 01:12:39.356 rmmod nvme_keyring 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2656331 ']' 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2656331 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2656331 ']' 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2656331 01:12:39.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2656331) - No such process 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2656331 is not found' 01:12:39.356 Process with pid 2656331 is not found 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:12:39.356 11:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:12:41.896 Waiting for block devices as requested 01:12:41.896 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 01:12:41.896 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:12:42.155 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:12:42.155 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:12:42.155 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:12:42.155 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:12:42.415 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:12:42.415 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:12:42.415 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:12:42.675 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:12:42.675 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:12:42.675 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:12:42.936 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:12:42.936 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:12:42.936 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:12:43.196 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:12:43.196 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:12:43.456 11:23:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:45.370 11:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:12:45.370 01:12:45.370 real 0m56.606s 01:12:45.370 user 1m20.146s 01:12:45.370 sys 0m16.541s 01:12:45.370 11:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:45.370 11:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:12:45.370 ************************************ 01:12:45.370 END TEST nvmf_abort_qd_sizes 01:12:45.370 ************************************ 01:12:45.370 11:23:46 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:12:45.370 11:23:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:12:45.370 11:23:46 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 01:12:45.370 11:23:46 -- common/autotest_common.sh@10 -- # set +x 01:12:45.370 ************************************ 01:12:45.370 START TEST keyring_file 01:12:45.370 ************************************ 01:12:45.629 11:23:46 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:12:45.629 * Looking for test storage... 01:12:45.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:12:45.629 11:23:46 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:45.629 11:23:46 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 01:12:45.629 11:23:46 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:45.629 11:23:46 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:45.629 11:23:46 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:45.629 11:23:46 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:45.629 11:23:46 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:45.629 11:23:46 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:12:45.629 11:23:46 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@345 -- # : 1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:45.630 11:23:46 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@353 -- # local d=1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@355 -- # echo 1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@353 -- # local d=2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@355 -- # echo 2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@368 -- # return 0 01:12:45.630 11:23:46 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:45.630 11:23:46 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:45.630 --rc genhtml_branch_coverage=1 01:12:45.630 --rc genhtml_function_coverage=1 01:12:45.630 --rc genhtml_legend=1 01:12:45.630 --rc geninfo_all_blocks=1 01:12:45.630 --rc geninfo_unexecuted_blocks=1 01:12:45.630 01:12:45.630 ' 01:12:45.630 11:23:46 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:45.630 --rc genhtml_branch_coverage=1 01:12:45.630 --rc genhtml_function_coverage=1 01:12:45.630 --rc genhtml_legend=1 01:12:45.630 --rc geninfo_all_blocks=1 01:12:45.630 --rc 
geninfo_unexecuted_blocks=1 01:12:45.630 01:12:45.630 ' 01:12:45.630 11:23:46 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:45.630 --rc genhtml_branch_coverage=1 01:12:45.630 --rc genhtml_function_coverage=1 01:12:45.630 --rc genhtml_legend=1 01:12:45.630 --rc geninfo_all_blocks=1 01:12:45.630 --rc geninfo_unexecuted_blocks=1 01:12:45.630 01:12:45.630 ' 01:12:45.630 11:23:46 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:45.630 --rc genhtml_branch_coverage=1 01:12:45.630 --rc genhtml_function_coverage=1 01:12:45.630 --rc genhtml_legend=1 01:12:45.630 --rc geninfo_all_blocks=1 01:12:45.630 --rc geninfo_unexecuted_blocks=1 01:12:45.630 01:12:45.630 ' 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:45.630 11:23:46 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:45.630 11:23:46 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:45.630 11:23:46 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:45.630 11:23:46 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:45.630 11:23:46 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:45.630 11:23:46 keyring_file -- paths/export.sh@5 -- # export PATH 01:12:45.630 11:23:46 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@51 -- # : 0 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
01:12:45.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:12:45.630 11:23:46 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@17 -- # name=key0 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@17 -- # digest=0 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@18 -- # mktemp 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3yVyPYg7HC 01:12:45.630 11:23:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:12:45.630 11:23:46 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 01:12:45.631 11:23:46 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:12:45.631 11:23:46 keyring_file -- nvmf/common.sh@733 -- # python - 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3yVyPYg7HC 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3yVyPYg7HC 01:12:45.631 11:23:46 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3yVyPYg7HC 01:12:45.631 11:23:46 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@17 -- # name=key1 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@17 -- # digest=0 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@18 -- # mktemp 01:12:45.631 11:23:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q5SWfI4ZxH 01:12:45.890 11:23:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:12:45.890 11:23:46 keyring_file -- nvmf/common.sh@733 -- # python - 01:12:45.890 11:23:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q5SWfI4ZxH 01:12:45.890 11:23:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q5SWfI4ZxH 01:12:45.890 11:23:46 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.q5SWfI4ZxH 
01:12:45.890 11:23:46 keyring_file -- keyring/file.sh@30 -- # tgtpid=2664087 01:12:45.890 11:23:46 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2664087 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2664087 ']' 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:45.890 11:23:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:12:45.890 11:23:46 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:12:45.890 [2024-12-09 11:23:46.929395] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:12:45.890 [2024-12-09 11:23:46.929473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664087 ] 01:12:45.890 [2024-12-09 11:23:47.058952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:12:46.149 [2024-12-09 11:23:47.111005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:46.409 11:23:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:46.409 11:23:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:12:46.409 11:23:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:12:46.409 11:23:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:46.409 11:23:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:12:46.409 [2024-12-09 11:23:47.357879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:46.409 null0 01:12:46.409 [2024-12-09 11:23:47.389925] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:12:46.409 [2024-12-09 11:23:47.390276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:12:46.409 11:23:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:46.410 11:23:47 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:12:46.410 [2024-12-09 11:23:47.417998] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:12:46.410 request: 01:12:46.410 { 01:12:46.410 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:12:46.410 "secure_channel": false, 01:12:46.410 "listen_address": { 01:12:46.410 "trtype": "tcp", 01:12:46.410 "traddr": "127.0.0.1", 01:12:46.410 "trsvcid": "4420" 01:12:46.410 }, 01:12:46.410 "method": "nvmf_subsystem_add_listener", 01:12:46.410 "req_id": 1 01:12:46.410 } 01:12:46.410 Got JSON-RPC error response 01:12:46.410 response: 01:12:46.410 { 01:12:46.410 "code": -32602, 01:12:46.410 "message": "Invalid parameters" 01:12:46.410 } 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:12:46.410 11:23:47 keyring_file -- keyring/file.sh@47 -- # bperfpid=2664217 01:12:46.410 11:23:47 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2664217 /var/tmp/bperf.sock 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2664217 ']' 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:12:46.410 11:23:47 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:12:46.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:46.410 11:23:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:12:46.410 11:23:47 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:12:46.410 [2024-12-09 11:23:47.480831] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 01:12:46.410 [2024-12-09 11:23:47.480904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664217 ] 01:12:46.410 [2024-12-09 11:23:47.577285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:12:46.670 [2024-12-09 11:23:47.622745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:46.670 11:23:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:46.670 11:23:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:12:46.670 11:23:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:46.670 11:23:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:46.930 11:23:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q5SWfI4ZxH 01:12:46.930 11:23:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q5SWfI4ZxH 01:12:47.189 11:23:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:12:47.189 11:23:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:12:47.189 11:23:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:47.189 11:23:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:47.189 11:23:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:47.448 11:23:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3yVyPYg7HC == \/\t\m\p\/\t\m\p\.\3\y\V\y\P\Y\g\7\H\C ]] 01:12:47.448 11:23:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:12:47.448 11:23:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:12:47.448 11:23:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:47.448 11:23:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:47.448 11:23:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:47.708 11:23:48 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.q5SWfI4ZxH == \/\t\m\p\/\t\m\p\.\q\5\S\W\f\I\4\Z\x\H ]] 01:12:47.708 11:23:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:12:47.708 11:23:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:47.708 11:23:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:47.708 11:23:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:47.708 11:23:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:47.708 11:23:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
01:12:48.278 11:23:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:12:48.278 11:23:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:48.278 11:23:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:12:48.278 11:23:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:48.278 11:23:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:48.538 [2024-12-09 11:23:49.691443] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:12:48.797 nvme0n1 01:12:48.797 11:23:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:12:48.797 11:23:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:48.797 11:23:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:48.797 11:23:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:48.797 11:23:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:48.797 11:23:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 01:12:49.058 11:23:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:12:49.058 11:23:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:12:49.058 11:23:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:12:49.058 11:23:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:49.058 11:23:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:49.058 11:23:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:49.058 11:23:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:49.318 11:23:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:12:49.318 11:23:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:12:49.318 Running I/O for 1 seconds... 01:12:50.257 13407.00 IOPS, 52.37 MiB/s 01:12:50.257 Latency(us) 01:12:50.257 [2024-12-09T10:23:51.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:50.257 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:12:50.257 nvme0n1 : 1.01 13411.43 52.39 0.00 0.00 9498.50 7864.32 19717.79 01:12:50.257 [2024-12-09T10:23:51.433Z] =================================================================================================================== 01:12:50.257 [2024-12-09T10:23:51.433Z] Total : 13411.43 52.39 0.00 0.00 9498.50 7864.32 19717.79 01:12:50.257 { 01:12:50.257 "results": [ 01:12:50.257 { 01:12:50.257 "job": "nvme0n1", 01:12:50.257 "core_mask": "0x2", 01:12:50.257 "workload": "randrw", 01:12:50.257 "percentage": 50, 01:12:50.257 "status": "finished", 01:12:50.257 "queue_depth": 128, 01:12:50.257 "io_size": 4096, 01:12:50.257 "runtime": 1.009214, 01:12:50.257 "iops": 13411.427110602905, 01:12:50.257 "mibps": 52.388387150792596, 
01:12:50.257 "io_failed": 0, 01:12:50.257 "io_timeout": 0, 01:12:50.257 "avg_latency_us": 9498.49591159795, 01:12:50.257 "min_latency_us": 7864.32, 01:12:50.257 "max_latency_us": 19717.787826086955 01:12:50.257 } 01:12:50.257 ], 01:12:50.257 "core_count": 1 01:12:50.257 } 01:12:50.257 11:23:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:12:50.258 11:23:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:12:50.827 11:23:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:50.827 11:23:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:12:50.827 11:23:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:50.827 11:23:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:50.827 11:23:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:51.397 11:23:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 01:12:51.397 11:23:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:12:51.397 11:23:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:12:51.397 [2024-12-09 11:23:52.546769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:12:51.397 [2024-12-09 11:23:52.546800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddd6a0 (107): Transport endpoint is not connected 01:12:51.397 [2024-12-09 11:23:52.547793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddd6a0 (9): Bad file descriptor 01:12:51.397 [2024-12-09 11:23:52.548794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:12:51.397 [2024-12-09 11:23:52.548809] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:12:51.397 [2024-12-09 11:23:52.548818] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:12:51.397 [2024-12-09 11:23:52.548830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 01:12:51.397 request: 01:12:51.397 { 01:12:51.397 "name": "nvme0", 01:12:51.397 "trtype": "tcp", 01:12:51.397 "traddr": "127.0.0.1", 01:12:51.397 "adrfam": "ipv4", 01:12:51.397 "trsvcid": "4420", 01:12:51.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:12:51.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:12:51.397 "prchk_reftag": false, 01:12:51.397 "prchk_guard": false, 01:12:51.397 "hdgst": false, 01:12:51.397 "ddgst": false, 01:12:51.397 "psk": "key1", 01:12:51.397 "allow_unrecognized_csi": false, 01:12:51.397 "method": "bdev_nvme_attach_controller", 01:12:51.397 "req_id": 1 01:12:51.397 } 01:12:51.397 Got JSON-RPC error response 01:12:51.397 response: 01:12:51.397 { 01:12:51.397 "code": -5, 01:12:51.397 "message": "Input/output error" 01:12:51.397 } 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:12:51.397 11:23:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:12:51.397 11:23:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:12:51.397 11:23:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:51.397 11:23:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:51.657 11:23:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:51.657 
11:23:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:51.657 11:23:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:51.917 11:23:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:12:51.917 11:23:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:12:51.917 11:23:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:12:51.917 11:23:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:51.917 11:23:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:51.917 11:23:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:51.917 11:23:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:52.177 11:23:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:12:52.177 11:23:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:12:52.177 11:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:12:52.436 11:23:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:12:52.436 11:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:12:52.696 11:23:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:12:52.696 11:23:53 keyring_file -- keyring/file.sh@78 -- # jq length 01:12:52.696 11:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:52.956 11:23:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:12:52.956 11:23:53 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3yVyPYg7HC 01:12:52.956 11:23:53 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:52.956 11:23:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:52.956 11:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:53.216 [2024-12-09 11:23:54.253203] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3yVyPYg7HC': 0100660 01:12:53.216 [2024-12-09 11:23:54.253235] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:12:53.216 request: 01:12:53.216 { 01:12:53.216 "name": "key0", 01:12:53.216 "path": "/tmp/tmp.3yVyPYg7HC", 01:12:53.216 "method": "keyring_file_add_key", 01:12:53.216 "req_id": 1 01:12:53.216 } 01:12:53.216 Got JSON-RPC error response 01:12:53.216 response: 01:12:53.216 { 01:12:53.216 "code": -1, 01:12:53.216 "message": "Operation not permitted" 01:12:53.216 } 01:12:53.216 11:23:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:12:53.216 11:23:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:12:53.216 11:23:54 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 01:12:53.216 11:23:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:12:53.216 11:23:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3yVyPYg7HC 01:12:53.216 11:23:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:53.216 11:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3yVyPYg7HC 01:12:53.475 11:23:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3yVyPYg7HC 01:12:53.475 11:23:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:12:53.475 11:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:53.475 11:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:53.475 11:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:53.475 11:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:53.475 11:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:53.735 11:23:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 01:12:53.735 11:23:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:12:53.735 11:23:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:53.735 11:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:53.993 [2024-12-09 11:23:55.115405] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3yVyPYg7HC': No such file or directory 01:12:53.993 [2024-12-09 11:23:55.115439] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:12:53.993 [2024-12-09 11:23:55.115458] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:12:53.993 [2024-12-09 11:23:55.115467] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:12:53.993 [2024-12-09 11:23:55.115478] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:12:53.993 [2024-12-09 11:23:55.115486] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:12:53.993 request: 01:12:53.993 { 01:12:53.993 "name": "nvme0", 01:12:53.993 "trtype": "tcp", 01:12:53.993 "traddr": "127.0.0.1", 01:12:53.993 "adrfam": "ipv4", 01:12:53.993 "trsvcid": "4420", 01:12:53.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:12:53.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:12:53.993 "prchk_reftag": 
false, 01:12:53.993 "prchk_guard": false, 01:12:53.993 "hdgst": false, 01:12:53.993 "ddgst": false, 01:12:53.993 "psk": "key0", 01:12:53.993 "allow_unrecognized_csi": false, 01:12:53.993 "method": "bdev_nvme_attach_controller", 01:12:53.993 "req_id": 1 01:12:53.993 } 01:12:53.993 Got JSON-RPC error response 01:12:53.993 response: 01:12:53.993 { 01:12:53.993 "code": -19, 01:12:53.993 "message": "No such device" 01:12:53.993 } 01:12:53.993 11:23:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:12:53.993 11:23:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:12:53.993 11:23:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:12:53.993 11:23:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:12:53.993 11:23:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:12:53.993 11:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:12:54.252 11:23:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@17 -- # name=key0 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@17 -- # digest=0 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@18 -- # mktemp 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.e7jtmuGzRP 01:12:54.252 11:23:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:12:54.252 11:23:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:12:54.511 11:23:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
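[Editor's note] The failure traced earlier in this run is SPDK's keyring file permission rule: keyring_file_add_key rejected the key while it was mode 0660 ("Invalid permissions for key file ... 0100660", surfaced as -1/"Operation not permitted") and accepted it only after chmod 0600 — i.e. no group/other bits may be set. A minimal shell sketch of that rule (illustrative only; the helper name `key_file_ok` is invented here, the real check lives in keyring.c):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the 0600-only rule the keyring_file test exercises:
# a key file with any group/other permission bits is rejected.
key_file_ok() {
    local perms
    perms=$(stat -c '%a' "$1")
    # accept only owner bits (e.g. 600, 400); 660 fails, matching the log
    [[ "$perms" =~ ^[0-7]00$ ]]
}

key=$(mktemp)
chmod 0660 "$key"
key_file_ok "$key" || echo "rejected at $(stat -c '%a' "$key")"
chmod 0600 "$key"
key_file_ok "$key" && echo "accepted at $(stat -c '%a' "$key")"
rm -f "$key"
```

This is why the test's NOT wrapper expects the first keyring_file_add_key to fail and the post-chmod retry to succeed.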
01:12:54.511 11:23:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:12:54.511 11:23:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:12:54.511 11:23:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:12:54.511 11:23:55 keyring_file -- nvmf/common.sh@733 -- # python - 01:12:54.511 11:23:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.e7jtmuGzRP 01:12:54.511 11:23:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.e7jtmuGzRP 01:12:54.511 11:23:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.e7jtmuGzRP 01:12:54.511 11:23:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.e7jtmuGzRP 01:12:54.511 11:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.e7jtmuGzRP 01:12:54.771 11:23:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:54.771 11:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:55.031 nvme0n1 01:12:55.031 11:23:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:12:55.031 11:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:55.031 11:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:55.031 11:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:55.031 11:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:55.031 11:23:56 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 01:12:55.291 11:23:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:12:55.291 11:23:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:12:55.291 11:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:12:55.550 11:23:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:12:55.550 11:23:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:12:55.550 11:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:55.550 11:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:55.551 11:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:55.810 11:23:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:12:55.810 11:23:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:12:55.810 11:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:55.810 11:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:55.810 11:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:55.810 11:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:55.810 11:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:56.070 11:23:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:12:56.070 11:23:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:12:56.070 11:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:12:56.329 11:23:57 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:12:56.329 11:23:57 keyring_file -- keyring/file.sh@105 -- # jq length 01:12:56.329 11:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:56.588 11:23:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:12:56.588 11:23:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.e7jtmuGzRP 01:12:56.588 11:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.e7jtmuGzRP 01:12:56.847 11:23:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q5SWfI4ZxH 01:12:56.847 11:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q5SWfI4ZxH 01:12:57.416 11:23:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:57.416 11:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:12:57.676 nvme0n1 01:12:57.676 11:23:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:12:57.676 11:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:12:57.937 11:23:58 keyring_file -- keyring/file.sh@113 -- # config='{ 01:12:57.937 "subsystems": [ 01:12:57.937 { 01:12:57.937 "subsystem": "keyring", 01:12:57.937 "config": [ 01:12:57.937 { 01:12:57.937 "method": 
"keyring_file_add_key", 01:12:57.937 "params": { 01:12:57.937 "name": "key0", 01:12:57.937 "path": "/tmp/tmp.e7jtmuGzRP" 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "keyring_file_add_key", 01:12:57.937 "params": { 01:12:57.937 "name": "key1", 01:12:57.937 "path": "/tmp/tmp.q5SWfI4ZxH" 01:12:57.937 } 01:12:57.937 } 01:12:57.937 ] 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "subsystem": "iobuf", 01:12:57.937 "config": [ 01:12:57.937 { 01:12:57.937 "method": "iobuf_set_options", 01:12:57.937 "params": { 01:12:57.937 "small_pool_count": 8192, 01:12:57.937 "large_pool_count": 1024, 01:12:57.937 "small_bufsize": 8192, 01:12:57.937 "large_bufsize": 135168, 01:12:57.937 "enable_numa": false 01:12:57.937 } 01:12:57.937 } 01:12:57.937 ] 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "subsystem": "sock", 01:12:57.937 "config": [ 01:12:57.937 { 01:12:57.937 "method": "sock_set_default_impl", 01:12:57.937 "params": { 01:12:57.937 "impl_name": "posix" 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "sock_impl_set_options", 01:12:57.937 "params": { 01:12:57.937 "impl_name": "ssl", 01:12:57.937 "recv_buf_size": 4096, 01:12:57.937 "send_buf_size": 4096, 01:12:57.937 "enable_recv_pipe": true, 01:12:57.937 "enable_quickack": false, 01:12:57.937 "enable_placement_id": 0, 01:12:57.937 "enable_zerocopy_send_server": true, 01:12:57.937 "enable_zerocopy_send_client": false, 01:12:57.937 "zerocopy_threshold": 0, 01:12:57.937 "tls_version": 0, 01:12:57.937 "enable_ktls": false 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "sock_impl_set_options", 01:12:57.937 "params": { 01:12:57.937 "impl_name": "posix", 01:12:57.937 "recv_buf_size": 2097152, 01:12:57.937 "send_buf_size": 2097152, 01:12:57.937 "enable_recv_pipe": true, 01:12:57.937 "enable_quickack": false, 01:12:57.937 "enable_placement_id": 0, 01:12:57.937 "enable_zerocopy_send_server": true, 01:12:57.937 "enable_zerocopy_send_client": false, 01:12:57.937 
"zerocopy_threshold": 0, 01:12:57.937 "tls_version": 0, 01:12:57.937 "enable_ktls": false 01:12:57.937 } 01:12:57.937 } 01:12:57.937 ] 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "subsystem": "vmd", 01:12:57.937 "config": [] 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "subsystem": "accel", 01:12:57.937 "config": [ 01:12:57.937 { 01:12:57.937 "method": "accel_set_options", 01:12:57.937 "params": { 01:12:57.937 "small_cache_size": 128, 01:12:57.937 "large_cache_size": 16, 01:12:57.937 "task_count": 2048, 01:12:57.937 "sequence_count": 2048, 01:12:57.937 "buf_count": 2048 01:12:57.937 } 01:12:57.937 } 01:12:57.937 ] 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "subsystem": "bdev", 01:12:57.937 "config": [ 01:12:57.937 { 01:12:57.937 "method": "bdev_set_options", 01:12:57.937 "params": { 01:12:57.937 "bdev_io_pool_size": 65535, 01:12:57.937 "bdev_io_cache_size": 256, 01:12:57.937 "bdev_auto_examine": true, 01:12:57.937 "iobuf_small_cache_size": 128, 01:12:57.937 "iobuf_large_cache_size": 16 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "bdev_raid_set_options", 01:12:57.937 "params": { 01:12:57.937 "process_window_size_kb": 1024, 01:12:57.937 "process_max_bandwidth_mb_sec": 0 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "bdev_iscsi_set_options", 01:12:57.937 "params": { 01:12:57.937 "timeout_sec": 30 01:12:57.937 } 01:12:57.937 }, 01:12:57.937 { 01:12:57.937 "method": "bdev_nvme_set_options", 01:12:57.937 "params": { 01:12:57.937 "action_on_timeout": "none", 01:12:57.937 "timeout_us": 0, 01:12:57.937 "timeout_admin_us": 0, 01:12:57.937 "keep_alive_timeout_ms": 10000, 01:12:57.937 "arbitration_burst": 0, 01:12:57.937 "low_priority_weight": 0, 01:12:57.937 "medium_priority_weight": 0, 01:12:57.937 "high_priority_weight": 0, 01:12:57.937 "nvme_adminq_poll_period_us": 10000, 01:12:57.937 "nvme_ioq_poll_period_us": 0, 01:12:57.937 "io_queue_requests": 512, 01:12:57.937 "delay_cmd_submit": true, 01:12:57.938 
"transport_retry_count": 4, 01:12:57.938 "bdev_retry_count": 3, 01:12:57.938 "transport_ack_timeout": 0, 01:12:57.938 "ctrlr_loss_timeout_sec": 0, 01:12:57.938 "reconnect_delay_sec": 0, 01:12:57.938 "fast_io_fail_timeout_sec": 0, 01:12:57.938 "disable_auto_failback": false, 01:12:57.938 "generate_uuids": false, 01:12:57.938 "transport_tos": 0, 01:12:57.938 "nvme_error_stat": false, 01:12:57.938 "rdma_srq_size": 0, 01:12:57.938 "io_path_stat": false, 01:12:57.938 "allow_accel_sequence": false, 01:12:57.938 "rdma_max_cq_size": 0, 01:12:57.938 "rdma_cm_event_timeout_ms": 0, 01:12:57.938 "dhchap_digests": [ 01:12:57.938 "sha256", 01:12:57.938 "sha384", 01:12:57.938 "sha512" 01:12:57.938 ], 01:12:57.938 "dhchap_dhgroups": [ 01:12:57.938 "null", 01:12:57.938 "ffdhe2048", 01:12:57.938 "ffdhe3072", 01:12:57.938 "ffdhe4096", 01:12:57.938 "ffdhe6144", 01:12:57.938 "ffdhe8192" 01:12:57.938 ] 01:12:57.938 } 01:12:57.938 }, 01:12:57.938 { 01:12:57.938 "method": "bdev_nvme_attach_controller", 01:12:57.938 "params": { 01:12:57.938 "name": "nvme0", 01:12:57.938 "trtype": "TCP", 01:12:57.938 "adrfam": "IPv4", 01:12:57.938 "traddr": "127.0.0.1", 01:12:57.938 "trsvcid": "4420", 01:12:57.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:12:57.938 "prchk_reftag": false, 01:12:57.938 "prchk_guard": false, 01:12:57.938 "ctrlr_loss_timeout_sec": 0, 01:12:57.938 "reconnect_delay_sec": 0, 01:12:57.938 "fast_io_fail_timeout_sec": 0, 01:12:57.938 "psk": "key0", 01:12:57.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:12:57.938 "hdgst": false, 01:12:57.938 "ddgst": false, 01:12:57.938 "multipath": "multipath" 01:12:57.938 } 01:12:57.938 }, 01:12:57.938 { 01:12:57.938 "method": "bdev_nvme_set_hotplug", 01:12:57.938 "params": { 01:12:57.938 "period_us": 100000, 01:12:57.938 "enable": false 01:12:57.938 } 01:12:57.938 }, 01:12:57.938 { 01:12:57.938 "method": "bdev_wait_for_examine" 01:12:57.938 } 01:12:57.938 ] 01:12:57.938 }, 01:12:57.938 { 01:12:57.938 "subsystem": "nbd", 01:12:57.938 "config": [] 
01:12:57.938 } 01:12:57.938 ] 01:12:57.938 }' 01:12:57.938 11:23:58 keyring_file -- keyring/file.sh@115 -- # killprocess 2664217 01:12:57.938 11:23:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2664217 ']' 01:12:57.938 11:23:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2664217 01:12:57.938 11:23:58 keyring_file -- common/autotest_common.sh@959 -- # uname 01:12:57.938 11:23:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:57.938 11:23:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664217 01:12:57.938 11:23:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:12:57.938 11:23:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:12:57.938 11:23:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664217' 01:12:57.938 killing process with pid 2664217 01:12:57.938 11:23:59 keyring_file -- common/autotest_common.sh@973 -- # kill 2664217 01:12:57.938 Received shutdown signal, test time was about 1.000000 seconds 01:12:57.938 01:12:57.938 Latency(us) 01:12:57.938 [2024-12-09T10:23:59.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:57.938 [2024-12-09T10:23:59.114Z] =================================================================================================================== 01:12:57.938 [2024-12-09T10:23:59.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:12:57.938 11:23:59 keyring_file -- common/autotest_common.sh@978 -- # wait 2664217 01:12:58.198 11:23:59 keyring_file -- keyring/file.sh@118 -- # bperfpid=2665823 01:12:58.198 11:23:59 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2665823 /var/tmp/bperf.sock 01:12:58.198 11:23:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2665823 ']' 01:12:58.198 11:23:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:12:58.198 11:23:59 keyring_file 
-- common/autotest_common.sh@840 -- # local max_retries=100 01:12:58.198 11:23:59 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:12:58.198 11:23:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:12:58.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:12:58.198 11:23:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:58.198 11:23:59 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:12:58.198 "subsystems": [ 01:12:58.198 { 01:12:58.198 "subsystem": "keyring", 01:12:58.198 "config": [ 01:12:58.198 { 01:12:58.198 "method": "keyring_file_add_key", 01:12:58.198 "params": { 01:12:58.198 "name": "key0", 01:12:58.198 "path": "/tmp/tmp.e7jtmuGzRP" 01:12:58.198 } 01:12:58.198 }, 01:12:58.198 { 01:12:58.198 "method": "keyring_file_add_key", 01:12:58.198 "params": { 01:12:58.198 "name": "key1", 01:12:58.198 "path": "/tmp/tmp.q5SWfI4ZxH" 01:12:58.198 } 01:12:58.198 } 01:12:58.198 ] 01:12:58.198 }, 01:12:58.198 { 01:12:58.198 "subsystem": "iobuf", 01:12:58.198 "config": [ 01:12:58.198 { 01:12:58.198 "method": "iobuf_set_options", 01:12:58.198 "params": { 01:12:58.198 "small_pool_count": 8192, 01:12:58.198 "large_pool_count": 1024, 01:12:58.198 "small_bufsize": 8192, 01:12:58.198 "large_bufsize": 135168, 01:12:58.198 "enable_numa": false 01:12:58.198 } 01:12:58.198 } 01:12:58.198 ] 01:12:58.198 }, 01:12:58.198 { 01:12:58.198 "subsystem": "sock", 01:12:58.198 "config": [ 01:12:58.198 { 01:12:58.198 "method": "sock_set_default_impl", 01:12:58.198 "params": { 01:12:58.198 "impl_name": "posix" 01:12:58.198 } 01:12:58.198 }, 01:12:58.198 { 01:12:58.198 "method": "sock_impl_set_options", 01:12:58.198 "params": { 01:12:58.198 "impl_name": "ssl", 01:12:58.198 
"recv_buf_size": 4096, 01:12:58.198 "send_buf_size": 4096, 01:12:58.199 "enable_recv_pipe": true, 01:12:58.199 "enable_quickack": false, 01:12:58.199 "enable_placement_id": 0, 01:12:58.199 "enable_zerocopy_send_server": true, 01:12:58.199 "enable_zerocopy_send_client": false, 01:12:58.199 "zerocopy_threshold": 0, 01:12:58.199 "tls_version": 0, 01:12:58.199 "enable_ktls": false 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "sock_impl_set_options", 01:12:58.199 "params": { 01:12:58.199 "impl_name": "posix", 01:12:58.199 "recv_buf_size": 2097152, 01:12:58.199 "send_buf_size": 2097152, 01:12:58.199 "enable_recv_pipe": true, 01:12:58.199 "enable_quickack": false, 01:12:58.199 "enable_placement_id": 0, 01:12:58.199 "enable_zerocopy_send_server": true, 01:12:58.199 "enable_zerocopy_send_client": false, 01:12:58.199 "zerocopy_threshold": 0, 01:12:58.199 "tls_version": 0, 01:12:58.199 "enable_ktls": false 01:12:58.199 } 01:12:58.199 } 01:12:58.199 ] 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "subsystem": "vmd", 01:12:58.199 "config": [] 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "subsystem": "accel", 01:12:58.199 "config": [ 01:12:58.199 { 01:12:58.199 "method": "accel_set_options", 01:12:58.199 "params": { 01:12:58.199 "small_cache_size": 128, 01:12:58.199 "large_cache_size": 16, 01:12:58.199 "task_count": 2048, 01:12:58.199 "sequence_count": 2048, 01:12:58.199 "buf_count": 2048 01:12:58.199 } 01:12:58.199 } 01:12:58.199 ] 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "subsystem": "bdev", 01:12:58.199 "config": [ 01:12:58.199 { 01:12:58.199 "method": "bdev_set_options", 01:12:58.199 "params": { 01:12:58.199 "bdev_io_pool_size": 65535, 01:12:58.199 "bdev_io_cache_size": 256, 01:12:58.199 "bdev_auto_examine": true, 01:12:58.199 "iobuf_small_cache_size": 128, 01:12:58.199 "iobuf_large_cache_size": 16 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_raid_set_options", 01:12:58.199 "params": { 01:12:58.199 
"process_window_size_kb": 1024, 01:12:58.199 "process_max_bandwidth_mb_sec": 0 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_iscsi_set_options", 01:12:58.199 "params": { 01:12:58.199 "timeout_sec": 30 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_nvme_set_options", 01:12:58.199 "params": { 01:12:58.199 "action_on_timeout": "none", 01:12:58.199 "timeout_us": 0, 01:12:58.199 "timeout_admin_us": 0, 01:12:58.199 "keep_alive_timeout_ms": 10000, 01:12:58.199 "arbitration_burst": 0, 01:12:58.199 "low_priority_weight": 0, 01:12:58.199 "medium_priority_weight": 0, 01:12:58.199 "high_priority_weight": 0, 01:12:58.199 "nvme_adminq_poll_period_us": 10000, 01:12:58.199 "nvme_ioq_poll_period_us": 0, 01:12:58.199 "io_queue_requests": 512, 01:12:58.199 "delay_cmd_submit": true, 01:12:58.199 "transport_retry_count": 4, 01:12:58.199 "bdev_retry_count": 3, 01:12:58.199 "transport_ack_timeout": 0, 01:12:58.199 "ctrlr_loss_timeout_sec": 0, 01:12:58.199 "reconnect_delay_sec": 0, 01:12:58.199 "fast_io_fail_timeout_sec": 0, 01:12:58.199 "disable_auto_failback": false, 01:12:58.199 "generate_uuids": false, 01:12:58.199 "transport_tos": 0, 01:12:58.199 "nvme_error_stat": false, 01:12:58.199 "rdma_srq_size": 0, 01:12:58.199 "io_path_stat": false, 01:12:58.199 "allow_accel_sequence": false, 01:12:58.199 "rdma_max_cq_size": 0, 01:12:58.199 "rdma_cm_event_timeout_ms": 0, 01:12:58.199 "dhchap_digests": [ 01:12:58.199 "sha256", 01:12:58.199 "sha384", 01:12:58.199 "sha512" 01:12:58.199 ], 01:12:58.199 "dhchap_dhgroups": [ 01:12:58.199 "null", 01:12:58.199 "ffdhe2048", 01:12:58.199 "ffdhe3072", 01:12:58.199 "ffdhe4096", 01:12:58.199 "ffdhe6144", 01:12:58.199 "ffdhe8192" 01:12:58.199 ] 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_nvme_attach_controller", 01:12:58.199 "params": { 01:12:58.199 "name": "nvme0", 01:12:58.199 "trtype": "TCP", 01:12:58.199 "adrfam": "IPv4", 01:12:58.199 "traddr": "127.0.0.1", 
01:12:58.199 "trsvcid": "4420", 01:12:58.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:12:58.199 "prchk_reftag": false, 01:12:58.199 "prchk_guard": false, 01:12:58.199 "ctrlr_loss_timeout_sec": 0, 01:12:58.199 "reconnect_delay_sec": 0, 01:12:58.199 "fast_io_fail_timeout_sec": 0, 01:12:58.199 "psk": "key0", 01:12:58.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:12:58.199 "hdgst": false, 01:12:58.199 "ddgst": false, 01:12:58.199 "multipath": "multipath" 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_nvme_set_hotplug", 01:12:58.199 "params": { 01:12:58.199 "period_us": 100000, 01:12:58.199 "enable": false 01:12:58.199 } 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "method": "bdev_wait_for_examine" 01:12:58.199 } 01:12:58.199 ] 01:12:58.199 }, 01:12:58.199 { 01:12:58.199 "subsystem": "nbd", 01:12:58.199 "config": [] 01:12:58.199 } 01:12:58.199 ] 01:12:58.199 }' 01:12:58.199 11:23:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:12:58.199 [2024-12-09 11:23:59.283575] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
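[Editor's note] The bdevperf restart above follows the harness's usual pattern: launch the app with its JSON config fed through process substitution (`-c /dev/fd/63`), then waitforlisten polls until the UNIX-domain RPC socket exists before issuing bperf_cmd RPCs. A self-contained sketch of the polling half, with a short Python one-liner standing in for bdevperf (the real binary is not needed to show the idea):

```shell
#!/usr/bin/env bash
# waitforlisten-style polling: block until a UNIX-domain RPC socket appears.
sock=$(mktemp -u)

# Stand-in for bdevperf: bind the socket after a short delay, hold it briefly.
python3 -c '
import socket, sys, time
time.sleep(0.3)
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
time.sleep(2)
' "$sock" &
pid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
rc=1
for _ in $(seq 1 100); do
    if [ -S "$sock" ]; then rc=0; break; fi   # socket exists: app is up
    sleep 0.1
done
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
rm -f "$sock"
```

Feeding the config via `/dev/fd/63` keeps the generated JSON (including the key paths) out of the filesystem and the process's argv.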
01:12:58.199 [2024-12-09 11:23:59.283665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665823 ] 01:12:58.459 [2024-12-09 11:23:59.377428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:12:58.459 [2024-12-09 11:23:59.419359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:58.459 [2024-12-09 11:23:59.583684] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:12:59.398 11:24:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:59.398 11:24:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:12:59.398 11:24:00 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:12:59.398 11:24:00 keyring_file -- keyring/file.sh@121 -- # jq length 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:59.398 11:24:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:12:59.398 11:24:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:12:59.398 11:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:12:59.657 11:24:00 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:12:59.657 11:24:00 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:12:59.657 11:24:00 
keyring_file -- keyring/common.sh@12 -- # get_key key1 01:12:59.657 11:24:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:12:59.657 11:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:12:59.657 11:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:12:59.917 11:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:13:00.176 11:24:01 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:13:00.176 11:24:01 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:13:00.176 11:24:01 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:13:00.176 11:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:13:00.435 11:24:01 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:13:00.435 11:24:01 keyring_file -- keyring/file.sh@1 -- # cleanup 01:13:00.435 11:24:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.e7jtmuGzRP /tmp/tmp.q5SWfI4ZxH 01:13:00.435 11:24:01 keyring_file -- keyring/file.sh@20 -- # killprocess 2665823 01:13:00.435 11:24:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2665823 ']' 01:13:00.435 11:24:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2665823 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@959 -- # uname 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2665823 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2665823' 01:13:00.436 killing process with pid 2665823 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@973 -- # kill 2665823 01:13:00.436 Received shutdown signal, test time was about 1.000000 seconds 01:13:00.436 01:13:00.436 Latency(us) 01:13:00.436 [2024-12-09T10:24:01.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:00.436 [2024-12-09T10:24:01.612Z] =================================================================================================================== 01:13:00.436 [2024-12-09T10:24:01.612Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:13:00.436 11:24:01 keyring_file -- common/autotest_common.sh@978 -- # wait 2665823 01:13:00.695 11:24:01 keyring_file -- keyring/file.sh@21 -- # killprocess 2664087 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2664087 ']' 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2664087 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@959 -- # uname 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664087 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664087' 01:13:00.695 killing process with pid 2664087 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@973 -- # kill 2664087 01:13:00.695 11:24:01 keyring_file -- common/autotest_common.sh@978 -- # wait 2664087 01:13:01.263 01:13:01.263 real 0m15.639s 01:13:01.263 user 0m39.158s 01:13:01.264 sys 0m3.858s 01:13:01.264 11:24:02 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
01:13:01.264 11:24:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:13:01.264 ************************************ 01:13:01.264 END TEST keyring_file 01:13:01.264 ************************************ 01:13:01.264 11:24:02 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:13:01.264 11:24:02 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:13:01.264 11:24:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:13:01.264 11:24:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:01.264 11:24:02 -- common/autotest_common.sh@10 -- # set +x 01:13:01.264 ************************************ 01:13:01.264 START TEST keyring_linux 01:13:01.264 ************************************ 01:13:01.264 11:24:02 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:13:01.264 Joined session keyring: 331779220 01:13:01.264 * Looking for test storage... 
01:13:01.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:13:01.264 11:24:02 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:13:01.264 11:24:02 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 01:13:01.264 11:24:02 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@345 -- # : 1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@368 -- # return 0 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:13:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:01.523 --rc genhtml_branch_coverage=1 01:13:01.523 --rc genhtml_function_coverage=1 01:13:01.523 --rc genhtml_legend=1 01:13:01.523 --rc geninfo_all_blocks=1 01:13:01.523 --rc geninfo_unexecuted_blocks=1 01:13:01.523 01:13:01.523 ' 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:13:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:01.523 --rc genhtml_branch_coverage=1 01:13:01.523 --rc genhtml_function_coverage=1 01:13:01.523 --rc genhtml_legend=1 01:13:01.523 --rc geninfo_all_blocks=1 01:13:01.523 --rc geninfo_unexecuted_blocks=1 01:13:01.523 01:13:01.523 ' 
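The trace above steps through SPDK's `cmp_versions` helper from scripts/common.sh: it splits "1.15" and "2" on dots into `ver1`/`ver2` arrays, iterates over the longer of the two, and compares numeric fields left to right to decide whether the installed lcov predates 2.x. A minimal Python sketch of the same dotted-version comparison; `lt_version` is a hypothetical name for illustration, not part of SPDK.

```python
def lt_version(v1: str, v2: str) -> bool:
    """Return True if dotted version v1 < v2, comparing numeric fields
    left to right and treating missing fields as 0 (so "1.15" < "2")."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    # Pad the shorter list with zeros, mirroring the shell loop that runs
    # for max(ver1_l, ver2_l) iterations over possibly-missing fields.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal versions are not "less than"

print(lt_version("1.15", "2"))  # True: the lcov check made in the trace above
```

Because the comparison succeeds, the trace goes on to set `lcov_rc_opt` and export the branch/function-coverage `LCOV_OPTS`.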
01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:13:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:01.523 --rc genhtml_branch_coverage=1 01:13:01.523 --rc genhtml_function_coverage=1 01:13:01.523 --rc genhtml_legend=1 01:13:01.523 --rc geninfo_all_blocks=1 01:13:01.523 --rc geninfo_unexecuted_blocks=1 01:13:01.523 01:13:01.523 ' 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:13:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:01.523 --rc genhtml_branch_coverage=1 01:13:01.523 --rc genhtml_function_coverage=1 01:13:01.523 --rc genhtml_legend=1 01:13:01.523 --rc geninfo_all_blocks=1 01:13:01.523 --rc geninfo_unexecuted_blocks=1 01:13:01.523 01:13:01.523 ' 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:01.523 11:24:02 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:01.523 11:24:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:01.523 11:24:02 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:01.523 11:24:02 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:01.523 11:24:02 keyring_linux -- paths/export.sh@5 -- # export PATH 01:13:01.523 11:24:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@51 -- # : 0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
01:13:01.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@733 -- # python - 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:13:01.523 /tmp/:spdk-test:key0 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:13:01.523 11:24:02 keyring_linux -- nvmf/common.sh@733 -- # python - 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:13:01.523 11:24:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:13:01.523 /tmp/:spdk-test:key1 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2666414 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:13:01.523 11:24:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2666414 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2666414 ']' 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:13:01.523 11:24:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:01.524 11:24:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:01.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:13:01.524 11:24:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:01.524 11:24:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:13:01.782 [2024-12-09 11:24:02.723149] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
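The `prep_key`/`format_interchange_psk` steps above turn the raw key string 00112233445566778899aabbccddeeff into an NVMe/TCP TLS PSK interchange value (the `NVMeTLSkey-1:00:...:` string written to /tmp/:spdk-test:key0 and later loaded into the session keyring with `keyctl add`). A hedged sketch of that layout, assuming the configured key bytes are followed by a CRC32 (taken here as zlib's CRC, appended little-endian) and base64-encoded; this illustrates the interchange format, it is not SPDK's implementation, and the function name is hypothetical.

```python
import base64
import struct
import zlib

def format_psk_interchange(configured_key: str, hmac_id: int = 0) -> str:
    """Build 'NVMeTLSkey-1:<hh>:Base64(key bytes || CRC32):'.
    Assumptions: zlib CRC32 over the key bytes, appended little-endian;
    the key bytes here are the ASCII key string itself, matching the
    32-character keys this test passes to prep_key with digest=0."""
    key = configured_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(key))
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02d}:{b64}:"

psk = format_psk_interchange("00112233445566778899aabbccddeeff")
print(psk.startswith("NVMeTLSkey-1:00:"))  # True
```

Decoding the base64 payload of the key0 string in this run gives 36 bytes: the 32 ASCII characters of the key followed by 4 CRC bytes, consistent with this layout.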
01:13:01.783 [2024-12-09 11:24:02.723232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666414 ] 01:13:01.783 [2024-12-09 11:24:02.851895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:01.783 [2024-12-09 11:24:02.906494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:13:02.042 11:24:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:13:02.042 [2024-12-09 11:24:03.155707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:02.042 null0 01:13:02.042 [2024-12-09 11:24:03.187740] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:13:02.042 [2024-12-09 11:24:03.188116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:02.042 11:24:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:13:02.042 780818795 01:13:02.042 11:24:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:13:02.042 500623158 01:13:02.042 11:24:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2666580 01:13:02.042 11:24:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2666580 /var/tmp/bperf.sock 01:13:02.042 11:24:03 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:13:02.042 11:24:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2666580 ']' 01:13:02.043 11:24:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:13:02.043 11:24:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:02.043 11:24:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:13:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:13:02.043 11:24:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:02.043 11:24:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:13:02.302 [2024-12-09 11:24:03.246803] Starting SPDK v25.01-pre git sha1 b920049a1 / DPDK 24.03.0 initialization... 
01:13:02.302 [2024-12-09 11:24:03.246858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666580 ] 01:13:02.302 [2024-12-09 11:24:03.325304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:02.302 [2024-12-09 11:24:03.367917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:13:02.302 11:24:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:02.302 11:24:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:13:02.302 11:24:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:13:02.302 11:24:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:13:02.562 11:24:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:13:02.562 11:24:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:13:03.131 11:24:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:13:03.131 11:24:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:13:03.131 [2024-12-09 11:24:04.293407] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:13:03.390 nvme0n1 01:13:03.390 11:24:04 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 01:13:03.390 11:24:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:13:03.390 11:24:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:13:03.390 11:24:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:13:03.390 11:24:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:13:03.390 11:24:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:13:03.649 11:24:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:13:03.649 11:24:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:13:03.649 11:24:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:13:03.649 11:24:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:13:03.649 11:24:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:13:03.649 11:24:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:13:03.649 11:24:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@25 -- # sn=780818795 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 780818795 == \7\8\0\8\1\8\7\9\5 ]] 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 780818795 01:13:03.909 11:24:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:13:03.909 11:24:04 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:13:03.909 Running I/O for 1 seconds... 01:13:05.285 13047.00 IOPS, 50.96 MiB/s 01:13:05.285 Latency(us) 01:13:05.285 [2024-12-09T10:24:06.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:05.286 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:13:05.286 nvme0n1 : 1.01 13047.72 50.97 0.00 0.00 9759.35 2436.23 11226.60 01:13:05.286 [2024-12-09T10:24:06.462Z] =================================================================================================================== 01:13:05.286 [2024-12-09T10:24:06.462Z] Total : 13047.72 50.97 0.00 0.00 9759.35 2436.23 11226.60 01:13:05.286 { 01:13:05.286 "results": [ 01:13:05.286 { 01:13:05.286 "job": "nvme0n1", 01:13:05.286 "core_mask": "0x2", 01:13:05.286 "workload": "randread", 01:13:05.286 "status": "finished", 01:13:05.286 "queue_depth": 128, 01:13:05.286 "io_size": 4096, 01:13:05.286 "runtime": 1.009755, 01:13:05.286 "iops": 13047.719496313463, 01:13:05.286 "mibps": 50.96765428247446, 01:13:05.286 "io_failed": 0, 01:13:05.286 "io_timeout": 0, 01:13:05.286 "avg_latency_us": 9759.347967725435, 01:13:05.286 "min_latency_us": 2436.229565217391, 01:13:05.286 "max_latency_us": 11226.601739130434 01:13:05.286 } 01:13:05.286 ], 01:13:05.286 "core_count": 1 01:13:05.286 } 01:13:05.286 11:24:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:13:05.286 11:24:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:13:05.286 11:24:06 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:13:05.286 11:24:06 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:13:05.286 11:24:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:13:05.286 11:24:06 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:13:05.286 11:24:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:13:05.286 11:24:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:13:05.547 11:24:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:13:05.547 11:24:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:13:05.547 11:24:06 keyring_linux -- keyring/linux.sh@23 -- # return 01:13:05.547 11:24:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:13:05.547 11:24:06 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:13:05.547 11:24:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:13:05.807 [2024-12-09 11:24:06.950054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:13:05.807 [2024-12-09 11:24:06.950896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e94790 (107): Transport endpoint is not connected 01:13:05.807 [2024-12-09 11:24:06.951891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e94790 (9): Bad file descriptor 01:13:05.807 [2024-12-09 11:24:06.952892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:13:05.807 [2024-12-09 11:24:06.952911] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:13:05.807 [2024-12-09 11:24:06.952921] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:13:05.807 [2024-12-09 11:24:06.952933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:13:05.807 request: 01:13:05.807 { 01:13:05.807 "name": "nvme0", 01:13:05.807 "trtype": "tcp", 01:13:05.807 "traddr": "127.0.0.1", 01:13:05.807 "adrfam": "ipv4", 01:13:05.807 "trsvcid": "4420", 01:13:05.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:05.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:05.807 "prchk_reftag": false, 01:13:05.807 "prchk_guard": false, 01:13:05.807 "hdgst": false, 01:13:05.807 "ddgst": false, 01:13:05.807 "psk": ":spdk-test:key1", 01:13:05.807 "allow_unrecognized_csi": false, 01:13:05.807 "method": "bdev_nvme_attach_controller", 01:13:05.807 "req_id": 1 01:13:05.807 } 01:13:05.807 Got JSON-RPC error response 01:13:05.807 response: 01:13:05.807 { 01:13:05.807 "code": -5, 01:13:05.807 "message": "Input/output error" 01:13:05.807 } 01:13:05.807 11:24:06 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:13:05.807 11:24:06 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:13:05.807 11:24:06 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:13:05.807 11:24:06 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@33 -- # sn=780818795 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 780818795 01:13:05.807 1 links removed 01:13:05.807 11:24:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:13:06.067 
11:24:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@33 -- # sn=500623158 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 500623158 01:13:06.067 1 links removed 01:13:06.067 11:24:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2666580 01:13:06.067 11:24:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2666580 ']' 01:13:06.067 11:24:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2666580 01:13:06.067 11:24:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:13:06.067 11:24:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2666580 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2666580' 01:13:06.067 killing process with pid 2666580 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 2666580 01:13:06.067 Received shutdown signal, test time was about 1.000000 seconds 01:13:06.067 01:13:06.067 Latency(us) 01:13:06.067 [2024-12-09T10:24:07.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:06.067 [2024-12-09T10:24:07.243Z] =================================================================================================================== 01:13:06.067 [2024-12-09T10:24:07.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:13:06.067 11:24:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 2666580 
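Every `bperf_cmd` above shells out to scripts/rpc.py -s /var/tmp/bperf.sock, which frames a JSON-RPC 2.0 request, sends it over the app's Unix domain socket, and parses the reply (the successful `keyring_get_keys` calls, and the failed `bdev_nvme_attach_controller` whose error response is dumped earlier). A minimal sketch of that request/response exchange against a stand-in echo server; the server, socket path, and canned reply are test scaffolding for illustration, not SPDK components, and real clients must loop/frame reads rather than rely on one recv.

```python
import json
import os
import socket
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "bperf.sock")  # stand-in for /var/tmp/bperf.sock

# Bind and listen up front so a client can connect before accept() runs.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
srv.listen(1)

def serve_one():
    # Minimal stand-in for an SPDK app's RPC listener: accept one
    # connection, read one request, answer with a JSON-RPC result.
    conn, _ = srv.accept()
    req = json.loads(conn.recv(65536).decode())
    reply = {"jsonrpc": "2.0", "id": req["id"],
             "result": [{"name": ":spdk-test:key0", "refcnt": 1}]}
    conn.sendall(json.dumps(reply).encode())
    conn.close()
    srv.close()

def rpc_call(method, params=None):
    """What rpc.py does in essence: send a JSON-RPC 2.0 request over the
    Unix domain socket and return the 'result' field of the reply."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params:
        req["params"] = params
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK)
    cli.sendall(json.dumps(req).encode())
    resp = json.loads(cli.recv(65536).decode())  # tiny messages: one recv suffices here
    cli.close()
    return resp["result"]

t = threading.Thread(target=serve_one)
t.start()
keys = rpc_call("keyring_get_keys")
t.join()
print(len(keys))  # 1
```

The `jq length` / `jq -r .sn` pipelines in the trace then operate on exactly this kind of `result` array to count keys and pull out serial numbers.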
01:13:06.326 11:24:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2666414 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2666414 ']' 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2666414 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2666414 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2666414' 01:13:06.326 killing process with pid 2666414 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 2666414 01:13:06.326 11:24:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 2666414 01:13:06.894 01:13:06.894 real 0m5.508s 01:13:06.894 user 0m10.673s 01:13:06.894 sys 0m1.802s 01:13:06.894 11:24:07 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:06.894 11:24:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:13:06.894 ************************************ 01:13:06.894 END TEST keyring_linux 01:13:06.894 ************************************ 01:13:06.894 11:24:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:13:06.894 11:24:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:13:06.894 11:24:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:13:06.894 11:24:07 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:13:06.894 11:24:07 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:13:06.894 11:24:07 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:13:06.894 11:24:07 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:13:06.894 11:24:07 -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:06.894 11:24:07 -- common/autotest_common.sh@10 -- # set +x 01:13:06.894 11:24:07 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:13:06.894 11:24:07 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:13:06.894 11:24:07 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:13:06.894 11:24:07 -- common/autotest_common.sh@10 -- # set +x 01:13:12.166 INFO: APP EXITING 01:13:12.166 INFO: killing all VMs 01:13:12.166 INFO: killing vhost app 01:13:12.166 WARN: no vhost pid file found 01:13:12.166 INFO: EXIT DONE 01:13:14.700 0000:5e:00.0 (8086 0a54): Already using the nvme driver 01:13:14.700 0000:00:04.7 (8086 2021): Already using the ioatdma driver 01:13:14.700 0000:00:04.6 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.5 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.4 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.3 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.2 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.1 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:00:04.0 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:80:04.7 (8086 2021): Already using the ioatdma driver 01:13:14.958 0000:80:04.6 (8086 2021): 
Already using the ioatdma driver 01:13:14.958 0000:80:04.5 (8086 2021): Already using the ioatdma driver 01:13:14.959 0000:80:04.4 (8086 2021): Already using the ioatdma driver 01:13:14.959 0000:80:04.3 (8086 2021): Already using the ioatdma driver 01:13:15.217 0000:80:04.2 (8086 2021): Already using the ioatdma driver 01:13:15.218 0000:80:04.1 (8086 2021): Already using the ioatdma driver 01:13:15.218 0000:80:04.0 (8086 2021): Already using the ioatdma driver 01:13:18.510 Cleaning 01:13:18.510 Removing: /var/run/dpdk/spdk0/config 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 01:13:18.510 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:13:18.510 Removing: /var/run/dpdk/spdk0/hugepage_info 01:13:18.510 Removing: /var/run/dpdk/spdk1/config 01:13:18.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:13:18.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:13:18.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 01:13:18.511 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:13:18.511 Removing: /var/run/dpdk/spdk1/hugepage_info 01:13:18.511 Removing: /var/run/dpdk/spdk2/config 01:13:18.511 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 01:13:18.511 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:13:18.511 Removing: /var/run/dpdk/spdk2/hugepage_info 01:13:18.511 Removing: /var/run/dpdk/spdk3/config 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 01:13:18.511 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:13:18.511 Removing: /var/run/dpdk/spdk3/hugepage_info 01:13:18.511 Removing: /var/run/dpdk/spdk4/config 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 01:13:18.511 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
01:13:18.511 Removing: /var/run/dpdk/spdk4/hugepage_info 01:13:18.511 Removing: /dev/shm/bdev_svc_trace.1 01:13:18.511 Removing: /dev/shm/nvmf_trace.0 01:13:18.511 Removing: /dev/shm/spdk_tgt_trace.pid2234063 01:13:18.511 Removing: /var/run/dpdk/spdk0 01:13:18.511 Removing: /var/run/dpdk/spdk1 01:13:18.511 Removing: /var/run/dpdk/spdk2 01:13:18.511 Removing: /var/run/dpdk/spdk3 01:13:18.511 Removing: /var/run/dpdk/spdk4 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2231506 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2232671 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2234063 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2234599 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2235345 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2235527 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2236292 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2236347 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2236654 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2238423 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2239875 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2240282 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2240536 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2240956 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2241202 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2241405 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2241601 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2241833 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2242427 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2245166 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2245384 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2245694 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2245765 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2246332 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2246509 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2247007 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2247080 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2247298 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2247465 01:13:18.511 Removing: 
/var/run/dpdk/spdk_pid2247677 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2247686 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2248153 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2248347 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2248683 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2252176 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2255989 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2265592 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2265966 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2269975 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2270181 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2274163 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2279403 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2281786 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2291072 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2298989 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2300529 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2301678 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2316480 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2320153 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2360105 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2364781 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2370075 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2376118 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2376124 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2376826 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2377527 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2378241 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2378605 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2378772 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2378952 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2379129 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2379145 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2379972 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2380677 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2381818 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2382182 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2382355 
01:13:18.511 Removing: /var/run/dpdk/spdk_pid2382531 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2383518 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2384472 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2391556 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2421237 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2425414 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2426660 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2428076 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2428260 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2428439 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2428621 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2429080 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2430461 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2431238 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2431785 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2433612 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2434002 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2434614 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2438939 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2443695 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2443697 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2443698 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2447258 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2455291 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2458469 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2463676 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2464722 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2465947 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2467181 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2471174 01:13:18.511 Removing: /var/run/dpdk/spdk_pid2475065 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2478809 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2485981 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2485995 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2490326 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2490510 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2490685 01:13:18.771 Removing: 
/var/run/dpdk/spdk_pid2491038 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2491073 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2495466 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2495910 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2499857 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2501992 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2506916 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2511712 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2520443 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2526898 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2526901 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2544543 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2544948 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2545561 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2546021 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2546608 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2547144 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2547590 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2548052 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2551869 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2552127 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2557925 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2558019 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2562743 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2566393 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2574862 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2575390 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2579175 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2579403 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2583198 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2588252 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2590610 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2600071 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2607866 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2609198 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2609903 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2624298 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2627757 
01:13:18.771 Removing: /var/run/dpdk/spdk_pid2630037 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2637613 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2637720 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2642592 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2644183 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2645719 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2646608 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2648235 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2649165 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2656807 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2657288 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2657678 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2659926 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2660296 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2660762 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2664087 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2664217 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2665823 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2666414 01:13:18.771 Removing: /var/run/dpdk/spdk_pid2666580 01:13:18.771 Clean 01:13:19.031 11:24:20 -- common/autotest_common.sh@1453 -- # return 0 01:13:19.031 11:24:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 01:13:19.031 11:24:20 -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:19.031 11:24:20 -- common/autotest_common.sh@10 -- # set +x 01:13:19.031 11:24:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 01:13:19.031 11:24:20 -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:19.031 11:24:20 -- common/autotest_common.sh@10 -- # set +x 01:13:19.031 11:24:20 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 01:13:19.031 11:24:20 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 01:13:19.031 11:24:20 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 01:13:19.031 11:24:20 -- 
spdk/autotest.sh@396 -- # [[ y == y ]] 01:13:19.031 11:24:20 -- spdk/autotest.sh@398 -- # hostname 01:13:19.031 11:24:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-26 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 01:13:19.290 geninfo: WARNING: invalid characters removed from testname! 01:13:51.533 11:24:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:13:54.835 11:24:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:13:58.130 11:24:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:14:00.669 11:25:01 
-- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:14:03.962 11:25:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:14:06.501 11:25:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:14:09.798 11:25:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:14:09.798 11:25:10 -- spdk/autorun.sh@1 -- $ timing_finish 01:14:09.799 11:25:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 01:14:09.799 11:25:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:14:09.799 11:25:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 01:14:09.799 11:25:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 01:14:09.799 + [[ -n 2154355 ]] 01:14:09.799 + sudo kill 2154355 01:14:09.809 [Pipeline] } 01:14:09.830 [Pipeline] // stage 01:14:09.836 [Pipeline] } 01:14:09.855 [Pipeline] // timeout 01:14:09.862 [Pipeline] } 01:14:09.884 [Pipeline] // catchError 01:14:09.890 [Pipeline] } 01:14:09.911 [Pipeline] // wrap 01:14:09.918 [Pipeline] } 01:14:09.942 [Pipeline] // catchError 01:14:09.954 [Pipeline] stage 01:14:09.956 [Pipeline] { (Epilogue) 01:14:09.972 [Pipeline] catchError 01:14:09.974 [Pipeline] { 01:14:09.989 [Pipeline] echo 01:14:09.990 Cleanup processes 01:14:09.997 [Pipeline] sh 01:14:10.285 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:14:10.285 2680809 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:14:10.300 [Pipeline] sh 01:14:10.588 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:14:10.588 ++ grep -v 'sudo pgrep' 01:14:10.588 ++ awk '{print $1}' 01:14:10.588 + sudo kill -9 01:14:10.588 + true 01:14:10.601 [Pipeline] sh 01:14:10.891 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:14:29.000 [Pipeline] sh 01:14:29.288 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:14:29.288 Artifacts sizes are good 01:14:29.303 [Pipeline] archiveArtifacts 01:14:29.310 Archiving artifacts 01:14:29.681 [Pipeline] sh 01:14:29.970 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 01:14:29.985 [Pipeline] cleanWs 01:14:29.995 [WS-CLEANUP] Deleting project workspace... 01:14:29.995 [WS-CLEANUP] Deferred wipeout is used... 
01:14:30.002 [WS-CLEANUP] done 01:14:30.003 [Pipeline] } 01:14:30.079 [Pipeline] // catchError 01:14:30.085 [Pipeline] sh 01:14:30.426 + logger -p user.info -t JENKINS-CI 01:14:30.436 [Pipeline] } 01:14:30.454 [Pipeline] // stage 01:14:30.459 [Pipeline] } 01:15:41.446 Cancelling nested steps due to timeout 01:15:41.456 [Pipeline] } 01:15:41.477 [Pipeline] // stage 01:15:41.486 [Pipeline] } 01:15:41.507 [Pipeline] // timeout 01:15:41.516 [Pipeline] } 01:15:41.521 Timeout has been exceeded 01:15:41.522 java.io.IOException: cannot start writing logs to a finished node StepAtomNode[id=61, exec=CpsFlowExecution[Owner[nvmf-tcp-phy-autotest/88726:nvmf-tcp-phy-autotest #88726]]] sh in CpsFlowExecution[Owner[nvmf-tcp-phy-autotest/88726:nvmf-tcp-phy-autotest #88726]] 01:15:41.522 at PluginClassLoader for workflow-support//org.jenkinsci.plugins.workflow.support.DefaultStepContext.getListener(DefaultStepContext.java:124) 01:15:41.522 at PluginClassLoader for workflow-support//org.jenkinsci.plugins.workflow.support.DefaultStepContext.get(DefaultStepContext.java:87) 01:15:41.522 at PluginClassLoader for workflow-support//org.jenkinsci.plugins.workflow.support.DefaultStepContext.makeLauncher(DefaultStepContext.java:163) 01:15:41.522 at PluginClassLoader for workflow-support//org.jenkinsci.plugins.workflow.support.DefaultStepContext.get(DefaultStepContext.java:83) 01:15:41.522 at PluginClassLoader for workflow-durable-task-step//org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.launcher(DurableTaskStep.java:490) 01:15:41.522 at PluginClassLoader for workflow-durable-task-step//org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.stop(DurableTaskStep.java:519) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsThread.stop(CpsThread.java:315) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsBodyExecution$2.onSuccess(CpsBodyExecution.java:265) 01:15:41.522 
at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsBodyExecution$2.onSuccess(CpsBodyExecution.java:249) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsFlowExecution$4$1.run(CpsFlowExecution.java:995) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$wrap$2(CpsVmExecutorService.java:85) 01:15:41.522 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 01:15:41.522 at java.base/java.util.concurrent.FutureTask.run(Unknown Source) 01:15:41.522 at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139) 01:15:41.522 at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) 01:15:41.522 at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68) 01:15:41.522 at jenkins.util.ErrorLoggingExecutorService.lambda$wrap$0(ErrorLoggingExecutorService.java:51) 01:15:41.522 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 01:15:41.522 at java.base/java.util.concurrent.FutureTask.run(Unknown Source) 01:15:41.522 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 01:15:41.522 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:53) 01:15:41.522 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:50) 01:15:41.522 at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136) 01:15:41.522 at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275) 01:15:41.522 at PluginClassLoader for 
workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$categoryThreadFactory$0(CpsVmExecutorService.java:50) 01:15:41.522 at java.base/java.lang.Thread.run(Unknown Source) 01:15:41.522 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 2a6ce606-2af2-4548-add3-e7af7f04f45d 01:15:41.522 Setting overall build result to ABORTED 01:15:41.539 [Pipeline] // catchError 01:15:41.544 [Pipeline] } 01:15:41.559 [Pipeline] // wrap 01:15:41.564 [Pipeline] } 01:15:41.578 [Pipeline] // catchError 01:15:41.595 [Pipeline] stage 01:15:41.598 [Pipeline] { (Epilogue) 01:15:41.612 [Pipeline] catchError 01:15:41.614 [Pipeline] { 01:15:41.627 [Pipeline] echo 01:15:41.629 Cleanup processes 01:15:41.635 [Pipeline] sh 01:15:42.101 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:15:42.101 957189 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:15:42.115 [Pipeline] sh 01:15:42.394 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:15:42.395 ++ grep -v 'sudo pgrep' 01:15:42.395 ++ awk '{print $1}' 01:15:42.395 + sudo kill -9 01:15:42.395 + true 01:15:42.405 [Pipeline] sh 01:15:42.687 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:15:54.925 [Pipeline] sh 01:15:55.208 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:15:55.208 Artifacts sizes are good 01:15:55.222 [Pipeline] archiveArtifacts 01:15:55.229 Archiving artifacts 01:15:55.569 [Pipeline] sh 01:15:55.850 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 01:15:55.864 [Pipeline] cleanWs 01:15:55.873 [WS-CLEANUP] Deleting project workspace... 01:15:55.873 [WS-CLEANUP] Deferred wipeout is used... 01:15:55.882 [WS-CLEANUP] done 01:15:55.884 [Pipeline] } 01:15:55.904 [Pipeline] // catchError 01:15:55.914 [Pipeline] echo 01:15:55.916 Tests finished with errors. Please check the logs for more info. 01:15:55.919 [Pipeline] echo 01:15:55.921 Execution node will be rebooted. 
01:15:55.936 [Pipeline] build 01:15:55.940 Scheduling project: reset-job 01:15:55.955 [Pipeline] sh 01:15:56.236 + logger -p user.info -t JENKINS-CI 01:15:56.245 [Pipeline] } 01:15:56.259 [Pipeline] // stage 01:15:56.264 [Pipeline] } 01:15:56.272 [Pipeline] // node 01:15:56.276 [Pipeline] // node 01:15:56.286 [Pipeline] End of Pipeline 01:15:56.316 Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: e1b876b6-6096-4722-bc55-6eefac3c538e 01:15:56.316 java.lang.NullPointerException: Cannot read field "lease" because "c" is null 01:15:56.316 at PluginClassLoader for workflow-durable-task-step//org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$WorkspaceListLeaseTranslator.get(ExecutorStepDynamicContext.java:210) 01:15:56.316 at PluginClassLoader for workflow-durable-task-step//org.jenkinsci.plugins.workflow.support.steps.ExecutorStepExecution$PlaceholderTask$Callback.finished(ExecutorStepExecution.java:996) 01:15:56.316 at PluginClassLoader for workflow-step-api//org.jenkinsci.plugins.workflow.steps.BodyExecutionCallback$TailCall.onSuccess(BodyExecutionCallback.java:119) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsBodyExecution$SuccessAdapter.receive(CpsBodyExecution.java:381) 01:15:56.316 at PluginClassLoader for workflow-cps//com.cloudbees.groovy.cps.Outcome.resumeFrom(Outcome.java:70) 01:15:56.316 at PluginClassLoader for workflow-cps//com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:144) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:17) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:49) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:180) 01:15:56.316 at PluginClassLoader for 
workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:423) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:331) 01:15:56.316 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:295) 01:15:56.317 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$wrap$4(CpsVmExecutorService.java:140) 01:15:56.317 at java.base/java.util.concurrent.FutureTask.run(Unknown Source) 01:15:56.317 at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139) 01:15:56.317 at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) 01:15:56.317 at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68) 01:15:56.317 at jenkins.util.ErrorLoggingExecutorService.lambda$wrap$0(ErrorLoggingExecutorService.java:51) 01:15:56.317 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 01:15:56.317 at java.base/java.util.concurrent.FutureTask.run(Unknown Source) 01:15:56.317 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 01:15:56.317 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 01:15:56.317 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:53) 01:15:56.317 at PluginClassLoader for workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:50) 01:15:56.317 at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136) 01:15:56.317 at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275) 01:15:56.317 at PluginClassLoader for 
workflow-cps//org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$categoryThreadFactory$0(CpsVmExecutorService.java:50) 01:15:56.317 at java.base/java.lang.Thread.run(Unknown Source) 01:15:56.320 Finished: ABORTED